Data Creep in Schools and Daycares in Waterfront Toronto’s Quayside? Where’s the alarm?

Teens in a classroom

Open letter to Waterfront Toronto, City of Toronto Council, Mayor John Tory, Minister of Education Stephen Lecce, and the Premier of Ontario, Doug Ford, on the implications of “Data Creep in Schools and Daycares in Waterfront Toronto’s Quayside.”

The just-released Quayside Discussion Guide, produced for Waterfront Toronto’s MIDP Evaluation Consultation, February 2020, Round 2, has one very troubling “solution” listed in the Complete Communities and Inclusivity section:

Waterfront TO categorizes the integration of a “public elementary school and childcare facility” in Quayside as a solution it supports if there is government support:

Waterfront Toronto’s failure to recognize the potential for the violation of children’s data privacy in these two domains, digital AND physical, is alarming.

First: currently, under the Ontario Education Act, publicly funded schools are not considered spaces “that are open to the public,” i.e. public spaces. The question of whether schools are public places was raised before the Human Rights Commission in Fall 2017 regarding Kenner Fee, an autistic boy who hoped to have his service dog in the classroom. The Waterloo Board’s lawyer, Nadya Tymochenko, stated, “The school is not a public space,” and “The classrooms in a school are not publicly accessible.”

Our legislation recognizes the need to secure the physical safety of our children and restricts public access to anyone entering a school. Period. Why data collection, broadly framed here, would be permissible is a mystery. If data collection is strictly to do with utilities and infrastructure (water, electricity, temperature), that seems feasible and valuable. Any data collection beyond that opens up the potential for surveillance creep for our most vulnerable residents. That the data here is undefined is not acceptable.

As to the casual inclusion of child care facilities, more alarms sound. If childcare facilities are privately funded, will this be an opt-in option for private businesses that serve children? That’s leaving aside data privacy precarity again, given Google’s history of collecting children’s personal information.

Daycare
Daycare with toys and children. Photo Credit: BBC Creative on Unsplash

As I have noted elsewhere, there is no logical basis to trust that Sidewalk Labs will consistently adhere to whatever regulations are in effect. The lack of recognition in the Waterfront Toronto Quayside Discussion Guide as to the vulnerability of minors leaves open the potential for what Rob Kitchin has termed the phenomenon of “control creep.”

Kitchin’s work has documented how Smart City infrastructures “are promoted as providing enhanced and more efficient and effective city services, ensuring safety and security, and providing resilience to economic and environmental shocks, but they also seriously infringe upon citizen’s privacy and are being used to profile and socially sort people, enact forms of anticipatory governance, and enable control creep, that is re-appropriation for uses beyond their initial design” (2015, italics mine).

These concerns as to whether Alphabet subsidiary companies will rigorously respect data privacy and forego data tracking remain significant given the new Feb. 20, 2020 charges brought against Google by the Attorney General of New Mexico, Hector Balderas, alleging that Google is collecting the data of minors via its suite of ed-tech apps and services: Chromebooks, G Suite, Gmail, and Google Docs. If proven, this will be the second time Google has knowingly collected children’s data via its ed-tech, in violation of COPPA, the Children’s Online Privacy Protection Act. (See other violations as to collecting children’s data.) Although Google has now committed to phasing out third-party cookies that enable data tracking by 2022, Google’s “Privacy Sandbox” will not stop its own data collection.

We should be very concerned about the scope and scale at which Google has already colonized our children’s futures, via its dominance in the ed-tech space, the entertainment space (YouTube Kids), and the really unfathomable extent of its dynamic, persistent digital profiling of users’ organic online behaviour.

What possible options do we have to counter “data creep”?

First, remove this “solution” from the existing agreement until we have better protections for minors in Canada; the protections we have now are inadequate.

Second, look to the two significant regulations now impacting Google, YouTube Kids, and tech platforms that serve child-directed content.

The first is a Nov. 22, 2019 FTC requirement directed at YouTube and YouTube Kids that all content “directed to children” be tagged as such, that viewers of that content cannot be tracked with persistent identifiers, and that all other COPPA regulations be met. This effectively requires YouTube to self-regulate compliance on its platforms, and content creators globally are “scrambling” over how to avoid possible violations and financial penalties.
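To make the mechanics concrete, the rule reduces to gating identifiers on a content flag. Here is a minimal, hypothetical sketch in Python; the names (Video, build_ad_request) are mine for illustration, not YouTube’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Video:
    video_id: str
    made_for_kids: bool  # creators must now self-declare child-directed content

def build_ad_request(video: Video, tracking_id: Optional[str]) -> dict:
    """Assemble an ad request, dropping persistent identifiers for child-directed content."""
    if video.made_for_kids:
        # Per COPPA: no persistent identifiers, hence no personalized ads
        return {"video": video.video_id, "ads": "contextual_only", "tracking_id": None}
    return {"video": video.video_id, "ads": "personalized", "tracking_id": tracking_id}
```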

The second is the new UK “Age Appropriate Design Code,” brought forward by the Information Commissioner’s Office, which applies to all digital media companies and platforms and requires that harmful content be blocked from minors. Let me quote in full:

“There are laws to protect children in the real world. We need our laws to protect children in the digital world too.” – UK Information Commissioner

“Today the Information Commissioner’s Office has published its final Age Appropriate Design Code – a set of 15 standards that online services should meet to protect children’s privacy.

The code sets out the standards expected of those responsible for designing, developing or providing online services like apps, connected toys, social media platforms, online games, educational websites and streaming services. It covers services likely to be accessed by children and which process their data.

The code will require digital services to automatically provide children with a built-in baseline of data protection whenever they download a new app, game or visit a website.

That means privacy settings should be set to high by default and nudge techniques should not be used to encourage children to weaken their settings. Location settings that allow the world to see where a child is, should also be switched off by default. Data collection and sharing should be minimized and profiling that can allow children to be served up targeted content should be switched off by default too.” (Jan. 22, 2020.)
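To translate “high privacy by default” into engineering terms, here is a hedged sketch of the defaults such a service might ship; the class and field names are hypothetical, not the ICO’s specification:

```python
from dataclasses import dataclass

@dataclass
class DefaultChildSettings:
    # Per the code: privacy high, geolocation off, profiling off, sharing minimized
    privacy_level: str = "high"
    location_visible: bool = False
    profiling_for_targeted_content: bool = False
    third_party_data_sharing: bool = False

    def weaken(self, setting: str) -> None:
        # The code also bans "nudge techniques": weakening a setting must be an
        # explicit, informed choice, never a pre-ticked box or a dark pattern.
        raise NotImplementedError("require an explicit, age-appropriate consent flow")
```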

We do not have this degree of data protection for minors in Canada, let alone for adults. We should be vigilant about not simply granting access to children’s data as a bullet-point “solution” without any regard or attention to what that could mean in the future. We should be demanding regulation at the federal level that can impose significant and meaningful financial penalties and operational restrictions for all violations of children’s data privacy.

As I have said before, if we can’t effectively protect children’s data privacy, we should assume that data privacy for 13+ is functionally non-existent. Every adult living today who has spent time online has a dynamic, persistent, constantly updating targetable profile. Do we want this for our children? As adults and parents, we need to demand much more rigorous and punitive regulations, because if we don’t, it won’t happen and there will be no limits to “data creep.” In the US and the UK, outcry and pressure from parents, the media, and children’s privacy advocates, such as The Campaign for a Commercial-Free Childhood, are producing results. We need similar activism in Canada.

See my earlier post, “We Street Proof Our Kids. Why Aren’t We Data-Proofing Them?”, originally published on The Conversation.

Top Photo Credit: Neonbrand on Unsplash

SidewalkTO and My Warning for Waterfront Toronto on an Open Door to Data in the MIDP ‘Realignment’ Summary

Computer code on screen

This morning I attended the Waterfront Toronto Board of Directors Meeting and while the initial summary of ‘realignments’ from the original Sidewalk Labs / SidewalkTO Master Innovation and Development Plan (MIDP) seemed a positive move forward, one point in the summary handout is deeply problematic.

My tweet from this a.m. on first reading of the summary of ‘realignments’ from the original Sidewalk Labs / Sidewalk TO Master Innovation and Development Plan (MIDP).

Dark Pattern Designs

The underlined text is a clear example of “dark pattern” design – in this instance, language that obfuscates and/or manipulates attention and perception to the advantage of the corporate interest. Here ‘commercially reasonable efforts’ defers an ethical treatment of data to established, current practices in the commercial sector. The legal loopholes and exceptions this phrasing enables mean you might as well say: whatever we can get away with legally, as per our Terms of Service, we will.

Let me give examples from a current *live* Privacy Policy that demonstrate how much personal and non-personal data is legally collected and used. The following sections are from Calm, the #1 sleep app, Apple’s Best of 2018 Award Winner, Apple’s 2017 App of the Year, and ‘The Happiest App in the World,’ according to the Center for Humane Technology. Most striking (see below) is how user data is clearly stated to be a ‘business asset’ that can be disclosed or transferred in the event of a bankruptcy.

You can read the full privacy statement here (downloaded, it’s a 22-page PDF). Note that you have to link to the statement from the Terms of Service page, a deliberate second step designed to deter users from reading the privacy policy.

Data Collection

Note below the range of data collected automatically and that none of this data is ‘personal information.’

Automatic Data Collection. Calm.com Privacy Policy

In this section, “commercially reasonable” includes accessing personal information from other sources:

Note below the extensive collection of non-personal data: device identifier, user settings, location information, mobile carrier, and operating system of your device.

Anonymized Data

Note below how anonymized personal information is aggregated, encompassing de-identified demographic data and de-identified location information, for further use. As such, “Anonymized and aggregated information is not Personal Information, and we may use such information in a number of ways…”

The security of anonymized data is tenuous: in July 2019, researchers at UK universities “published a method they say is able to correctly re-identify 99.98% of individuals in anonymized data sets with just 15 demographic attributes.”
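The mechanics are easy to demonstrate. Here is a toy sketch with fabricated records: count how many rows share each combination of quasi-identifiers, and note how quickly combinations become unique (the cited study needed only 15 attributes to re-identify 99.98% of individuals):

```python
from collections import Counter

# Fabricated "anonymized" records: (postal prefix, birth year, gender)
records = [
    ("M5V", 1984, "F"), ("M5V", 1984, "M"), ("M4C", 1991, "F"),
    ("M4C", 1984, "F"), ("M5V", 1991, "M"),
]

combo_counts = Counter(records)
unique = sum(1 for count in combo_counts.values() if count == 1)
print(f"{unique}/{len(records)} records are uniquely identified "
      f"by just {len(records[0])} attributes")  # here: 5/5
```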

“Commercially Reasonable” & SidewalkTO

All of the above data collection and data use is “commercially reasonable.” The second major flag, in the continuation of the sentence I underlined, is the “process[ing] of non-personal data.” As a data category, this functionally includes anything and everything that is not personal, from web-browsing and search history to any other online activity you engage in, cross-device and cross-platform.

Suffice it to say, this particular phrasing gives Sidewalk Labs and SidewalkTO a firehose of data to analyze and add to pre-existing user activity digital profiles, which we all have as Google/YouTube ad targets. Waterfront Toronto should be absolutely concerned about what this statement legally allows. Any assurance of data privacy protection here is laughable.

If you haven’t read my prior posts on data privacy and children, a demographic more heavily regulated than adults, you can read these here:

“Data Creep in Schools and Daycares in Waterfront Toronto’s Quayside? Where’s the Alarm?” March 9, 2020.

“We street-proof our kids. Why aren’t we data proofing them?” Sept. 29, 2019

“Can We Trust Alphabet & Sidewalk Toronto with Children’s Data? Past Violations Say No.” June 6, 2019.

“Protecting children’s data privacy in the smart city.” May 15, 2019

Can We Trust Alphabet & Sidewalk Toronto with Children’s Data? Past Violations Say No.

Tweet capture of my deputation before the Executive Committee

I spoke today before the City of Toronto Executive Committee on the update to Quayside and the proposed Master Innovation and Development Plan from Sidewalk Toronto. The full text of my statement on the question of “Can We Trust Alphabet & Sidewalk Toronto with Children’s Data?” is below, though my public deputation was slightly shorter. You can watch my deputation here, starting at 2:55:38.

Deputation to the City of Toronto Executive Committee

Good afternoon and thank you for the opportunity to speak before you today. What I will speak to is a small segment of a larger academic study examining how big tech and entertainment conglomerates are handling children’s data; my paper on Big Data, Disney, and the Future of Children’s Entertainment was published yesterday.

To clarify, and to speak to Councillor Fletcher’s question: in Canada and the US, children under 13 are deemed to be minors and cannot give consent, hence terms of use requiring parental consent on most websites. In the EU, with the enforcement of the General Data Protection Regulation (GDPR) in May 2018, all but two countries raised the age of consent to 16. The Office of the Privacy Commissioner of Canada (OPC) recognizes children as vulnerable and deserving of special considerations: they cannot make informed decisions as to what they are agreeing to. We do not have adequate legislation in Canada to regulate today’s data collection practices, which generate pseudonymized consumer profiles via cross-browser fingerprinting and other methods.
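For readers unfamiliar with fingerprinting, a hedged sketch of the general technique follows; the attribute set is illustrative only, not any particular company’s implementation. Stable browser and device attributes are hashed into a pseudonymous identifier that persists with no cookie, no login, and no consent:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Derive a stable pseudonymous ID from browser/device attributes."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

profile_id = fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "America/Toronto",
    "language": "en-CA",
    "fonts": "Arial,Calibri,Helvetica",
})
# The same device yields the same ID on every visit, which is why a minor
# cannot make an informed decision about tracking they cannot even see.
```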

illustration of Quayside from Sidewalk  Toronto
Do you see children in this illustration from Sidewalk Labs? I do.

My findings on Alphabet’s subsidiary companies are alarming, well-documented internationally, and raise serious questions as to whether we can trust a big tech company to self-regulate. Alphabet’s subsidiary companies, Google, YouTube, and Google Play, have an established pattern of violating children’s data privacy, variously due to:

  • broadly, an (over)reliance on AI to serve ads and content recommendations;
  • a lack of human oversight on app developer practices in the Google Play store;
  • a lack of human oversight on YouTube, resulting in pedophile comments on child-posted videos, documented in major media coverage in 2017 and again in 2019;
  • an overreach in the data collection of minors and teens via Google Chromebooks, introduced in American schools in 2017, whereby account holders had to opt out of data collection.

Let me detail two instances further:

  1. A 2018 academic study, “Won’t Somebody Think of the Children?: Examining COPPA Compliance at Scale,” published in the Proceedings on Privacy Enhancing Technologies, found that thousands of Android apps potentially violated the Children’s Online Privacy Protection Act (COPPA) in the US. The study examined “5,855 child-directed Android apps from the US Play Store, which are included in Google’s Designed for Families programme,” and found that “Overall, roughly 57% of the 5,855 child-directed apps that we analysed are potentially violating Coppa.” A complaint from the Campaign for a Commercial Free Childhood to the FTC in the US expanded on how Google Play Store apps were marketing to children and, in turn, violating children’s privacy.
  2. James Bridle’s 2017 essay “Something is Wrong on the Internet” launched a media storm of concern as to the lack of regulation of child-directed, bot-generated videos on YouTube Kids, thousands of which offered disturbingly violent, copyright-violating content. In April 2018, YouTube finally launched “new features that allowed parents to create a white-listed, non-algorithmic version of its Kids app,” after months of parent and consumer advocacy groups demanding this function.

The consistent, documented pattern across Alphabet’s companies is a failure to enforce secure data privacy for children under 13 until an external organization calls attention to violations. Why is this important for Quayside? Sidewalk Labs is a sister company to three of Alphabet’s subsidiaries, all of which have failed to meet compliance requirements (more than once), with repeated international outcry, so there is no basis to expect that Sidewalk TO will be any more reliable in protecting or respecting the privacy of minors.

As John Thackara stated, trust is not an algorithm. So, can we trust companies who trust in algorithms? Based on existing documentation, we should not assume we can trust Alphabet’s Sidewalk Toronto to consistently respect the data privacy of our most vulnerable citizens, as its sister companies have not in the past. Currently, so-called “urban data” gathered in public spaces will scoop up the data of minors and treat it as adult data, unless protections are clearly designed and executed. Clarity as to how we can ensure the consistent protection of the data privacy of children and youth must be central to our discussions of technology globally and to Justin Trudeau’s proposed Digital Charter in Canada. It behooves us to be very circumspect about trusting Alphabet’s Sidewalk Toronto with our children’s data.

Note: The New York Times published a report, “On YouTube’s Digital Playground, an Open Gate for Pedophiles,” on Monday June 3, 2019, showing that AGAIN YouTube’s algorithms were pushing child-created content to pedophiles, resulting in mass *swarm* activity in views and in comments. The instances I referred to were from 2017 and February 2019.

See other posts on Sidewalk Toronto:

“Data Creep in Schools and Daycares in Waterfront Toronto’s Quayside? Where’s the Alarm?” March 9, 2020.

“We street-proof our kids. Why aren’t we data proofing them?” Sept. 29, 2019

“Protecting children’s data privacy in the smart city.” May 15, 2019

E-Mote AI

ESAA. Powered by E-Mote AI

A Speculative Exploration of Generative AI, Artificial Intimacy, Artificial Unintelligence, and the Uncanny.

Siobhan O’Flynn © 2024. Project website: E-Mote AI

Research Question:

E-Mote AI asks: What might be the effects (social, political, cultural) of the uncritical and unregulated adoption of generative AI Chatbots, assistants, and companions?

How might creating a fictional start-up company in this sector provoke questions, discussions, and critical engagement with the many concerns and benefits as they can be identified?

ESAA Gen 3. E-Mote AI 2023

E-Mote AI: An Exploration of Generative AI, Artificial Intimacy, Artificial Unintelligence, and the Uncanny is a transmedia project, currently in beta, utilizing the methodologies of speculative futures and critical design to explore the emergent logics, industry practices, and implications of the rush to bring AI mental wellness apps, assistants, and companions to market. This project situates today’s chatbots in a continuum of automata, such as the 18th-century Mechanical Turk (Ashford, 2017), and of responses to automata and AI, via Joseph Weizenbaum’s caution on the “powerful delusional thinking in quite normal people” he observed in responses to Doctor, the Eliza chatbot (Weizenbaum, 1967, p. 7). The website for E-Mote AI is designed to simulate a mental wellness start-up launching customizable and personalized AI chatbots for education, industry & HR. The text uses the marketing copy style churned out by ChatGPT to replicate the aspirational claims common in this sector, here generated by me through multiple revisions and curations of my prompts. The AI-generated text cascades in euphorically hollow metaphors and stylistic flourishes, absent any evidence in peer-reviewed medical studies or vetted journals. The project is designed for a general audience, not specialists in fields such as STS, critical code studies, digital humanities, or digital media studies.

E-Mote AI targets sectors rapidly shifting to AI as more efficient and less costly than “humane work” and “connective labor” (Pugh, 2022), with two iterations of ESAA, the Employee Sentiment Analysis Assistant and ESAA, the Empathetic Student Anxiety Assistant. The website includes simulated video avatars, generated through cross-posting output between Midjourney and LivingAI when it was available on ChatGPT4. The copy for the videos was scripted by me.

My intention is for users to experience E-Mote AI as a provocation to what is now becoming our status quo, now with the race for data for Large Language Models (LLMs) from OpenAI to Siri. The concept and design draw on methodologies from speculative critical design (Bratton, 2016; Dunne and Raby, 2013; Haraway, 2011; Jain, 2019; Candy and Watson, 2013) in order to simulate an encounter with AI chatbots that invite intimacy and are designed to quell uneasiness, while simultaneously (hopefully) raising uneasiness. Dunne and Raby set out the value of critical design in their work Speculative Everything, in a passage worth quoting in full:

Design as critique can do many things—pose questions, encourage thought, expose assumptions, provoke action, spark debate, raise awareness, offer new perspectives, and inspire. And even to entertain in an intellectual sort of way. But what is excellence in critical design? Is it subtlety, originality of topic, the handling of a question? Or something more functional such as its impact or its power to make people think? Should it even be measured or evaluated? It’s not a science after all and does not claim to be the best or most effective way of raising issues.

Critical design might borrow heavily from art’s methods and approaches but that is it. We expect art to be shocking and extreme. Critical design needs to be closer to the everyday; that’s where its power to disturb lies. A critical design should be demanding, challenging, and if it is going to raise awareness, do so for issues that are not already well known. Safe ideas will not linger in people’s minds or challenge prevailing views but if it is too weird, it will be dismissed as art, and if too normal, it will be effortlessly assimilated. If it is labeled as art it is easier to deal with but if it remains design, it is more disturbing; it suggests that the everyday life as we know it could be different, that things could change.

For us, a key feature is how well it simultaneously sits in this world, the here-and-now, while belonging to another yet-to-exist one. It proposes an alternative that through its lack of fit with this world offers a critique by asking, “why not?” If it sits too comfortably in one or the other it fails. That is why for us, critical designs need to be made physical. (2013, p. 43)

E-Mote AI is meant to “[sit] in this world, the here-and-now,” of today’s online environment and the AI tools and services that obfuscate the data capture propelled by for-profit goals. The irony of the design of E-Mote AI is that the advances in AI and acceleration of adoption now mean that this project is a present reality, though not yet widely understood, as the potential ramifications are unrecognized and often ill-defined.

Rather than aiming for a seamlessly palatable experience, my hope is that the repetitive hyperbole of the grandiose style paired with the occasional glitches can raise user concerns, drawing attention to cracks in the superficiality of understanding of AI Chatbots. Further, I hope that the frustrations experienced in encounters with the bespoke Poe Bot will serve as reminders of the lack of interpretability of the computational processes determining the output, while also providing indicators to gauge the limits and guardrails present in the coding, trackable in what is and isn’t recognized, and the algorithmic biases that may appear. Where the Wachowskis’ 1999 film, The Matrix, visualized this as freezes and glitches in the VR simulation, and the dramatic green cascades of computer code, today’s encounters materialize as friendly assistants on our devices, in our infrastructures, networks, and any system that is functionally reliant on algorithmic processes and decisions. 

I chose this sector of AI development to highlight the logics of existing AI chatbots and services (Pi, Replika, Woebot, Poe, KintsugiHealth), and as a provocation to encourage discussion, examination, and optimally, regulation of what Zuboff has termed “the instrumentalization of data” (2019), now being mined from personal, intimate, and therapeutic realms. The rush to capitalize on this new frontier of data is significantly unregulated, largely operating outside of the FDA in the US and the current and proposed AIDA regulation in Canada. Companies such as Calm and Woebot (apps), and KintsugiHealth and Supermanage (sentiment analysis via HR and Slack, respectively), all bypass the much more expensive, time-consuming, and heavily regulated requirements for mental health apps and services.

KintsugiHealth is of particular concern in that its AI analyzes audio biomarkers at increments too fine for human perception to enable early interventions pre-crisis, while self-designating as a mental wellness service. As Zuboff has warned, these new industries of data capture and knowledge production are fundamentally political: “The result is that these new knowledge territories become the subject of political conflict. The first conflict is over the distribution of knowledge: ‘Who knows?’ The second is about authority: ‘Who decides who knows?’ The third is about power: ‘Who decides who decides who knows?’” (Naughton, 2019).

The proliferation of intimate data-harvesting technologies such as these should be framed as a rights issue, ensuring the user’s / client’s / employee’s agency to decide whether or not to opt in to data collection and processing. Further, full transparency and accountability as to data use, third-party sharing, and internal and external data audits must be in place, as we are entering a period of mass-scale social experiment run by for-profits without ethical oversight.

E-Mote AI is designed to consider the short- and long-term implications of algorithmic therapists and companions, drawing on insights from Dr. Esther Perel, who has warned of the socio-cultural dangers of the “other AI in artificial intimacy” (TEDTalk, 2023). The intention of this project is not to prompt despair and apathy at a rapidly approaching dystopian near-future; instead, my goal is to emphasize our capacity to choose differently. Dunne and Raby outline the importance of “critical design” as an optimistic intervention towards better futures, writing: “All good critical design offers an alternative to how things are. It is the gap between reality as we know it and the different idea of reality referred to in the critical design proposal that creates the space for discussion. It depends on dialectical opposition between fiction and reality to have an effect. Critical design uses commentary but it is only one layer of many” (2013, p. 35). E-Mote AI embodies the “shiny thing” effect characteristic of generative AI tools and services as the wonder machines of our age. Hopefully, users are not lulled, wooed, and distracted, and instead might pause to consider the implications and questions raised by today’s cyber automata.

ESAA Gen 11 BDX. E-Mote AI 2024


Works Cited

Ashford, D. (2017). The Mechanical Turk: Enduring Misapprehensions Concerning Artificial Intelligence. The Cambridge Quarterly, 46(2), 119-139. https://doi.org/10.1093/camqtly/bfx005

Bratton, B. H. (2016). On Speculative Design. Dis Magazine. February. http://dismagazine.com/discussion/81971/on-speculative-design-benjamin-h-bratton/. Accessed 10 August 2019.

Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press.

Calm. https://www.calm.com/ Accessed 2017.

Candy, S., and Watson, J. (2013-ongoing). The Situation Lab. https://situationlab.org/. Accessed 2013.

Dunne, A., and Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press.

Haraway, D. (2011). Speculative Fabulations for Technoculture’s Generations: Taking Care of Unexpected Country. Australian Humanities Review, no. 50. https://australianhumanitiesreview.org/2011/05/01/speculative-fabulations-for-technoculturesgenerations-taking-care-of-unexpected-country/. Accessed 12 June 2013.

Inflection AI, Inc. (2023). Pi.AI. https://pi.ai/talk Accessed May 5, 2023.

Jain, A. (2019). Calling for a more-than-human politics: A field guide that can help us move towards the practice of a more-than-human politics. Superflux. Accessed 17 September 2021.

Kintsugi Mindful Wellness, Inc. (2022). Kintsugi. https://www.kintsugihealth.com/ Accessed November 12, 2022.

Loveless, N. (2019). How to Make Art at the End of the World: A Manifesto for Research-Creation. Duke University Press.

Naughton, J. (2019). The Goal is to Automate Us: Welcome to the Age of Surveillance Capitalism. The Guardian. Sunday 20 January. https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook Accessed January 20 2019.

Perel, E. (2023). Esther Perel on The Other AI: Artificial Intimacy | SXSW 2023. March 31. https://youtu.be/vSF-Al45hQU?si=WFvJOy7pnzWu0x5a Accessed April 9, 2023.

Poe…COMPLETE

Pugh, A. J. (2022). Constructing What Counts as Human Work: Enigma, Emotion, and Error in Connective Labor. American Behavioral Scientist, 67(14). https://doi.org/10.1177/00027642221127240 Accessed Nov. 18, 2023.

Replika…COMPLETE

Supermanage….COMPLETE

Wachowski, L., & Wachowski, L. (1999). The Matrix. Warner Bros.

Weizenbaum, J. (1967). Contextual understanding by computers. Communications of the ACM, 10(8). http://www.cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1967.pdf

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman.

Woebot Health (2017). Woebot. https://woebothealth.com/ Accessed Oct. 14 2022.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.

Children Not Profits

Big Tech Accountability and What We Can Do About It

I was invited to give a talk, “Children Not Profits,” to parents, students, and educators at Dr. Denison High School in Newmarket in April 2024 on the dangers of our current online environment for minors and on what parents and caregivers can do to demand effective regulation from our governments. I’m sharing this at the start of the school year, while many parents and caregivers will be wrestling with the new cellphone bans. It may be helpful to talk with teens about the power asymmetry that is integral to the design of social media, apps, and “very large online platforms” (the term now used in the EU).

My slides for the Dr. Denison High School talk.

In brief, we interact unknowingly with interfaces and user experiences that are addictive by design and that leverage highly sophisticated manipulation techniques, the latter now studied as dark patterns or dark deceptions, which trick you into actions you did not intend. In addition, if you see sponsored, recommended, or “push” content in your apps and on webpages, you’re seeing the end product determined by your dynamic digital profile, as targeted by thousands of data brokers selling your personal data to third parties. This 2018 keynote by Dr. Johnny Ryan, speaking to European broadcasters at the EGTA CEO’s Summit in Madrid, is an excellent primer on ad-targeting; note that it is six years old.

In the discussions I’ve had with teens and parents, no one is aware of the degree of this power imbalance, and why should we be? Given that very large online platforms are amongst the wealthiest companies globally, the money available for ongoing research, testing, and refinement of these techniques means that we are unprotected, constantly monitored, and unable to detect or respond to the constant updating of these systems’ features. How can any parent or teen stay abreast of what operates as design features “to enhance your experience”?

The talk I gave in the spring detailed the current state (spring 2024) of the online world as to data tracking, digital profiling, and content targeting: monitoring search, keystrokes, click-throughs, time spent, and more. We don’t think about the scope or scale of this data-sharing infrastructure when we see ads pop up on a web page. And in years of asking students and audiences, “Who here reads the Terms of Service, Terms of Use, or Privacy Policies?”, a very small number have said yes. I’ve also asked students in my Data Privacy in Canada course how many used fake birthdays to create social media accounts before they turned 13. The answer is almost always unanimous, with many sharing that they did so as early as ages 10 and 11.

Until recently, the minimum age of 13 for social media platforms was unquestioned as a de facto age-gate across online platforms. This began to change with whistleblower Frances Haugen’s disclosure of thousands of internal Facebook documents to the Wall Street Journal, detailing the tech company’s prioritizing of profit before public good and its awareness of known harms to tween and teen girls. The WSJ published the details in September 2021 in its series, the Facebook Files, and Haugen then testified before the Senate Commerce, Science and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security on Capitol Hill in October 2021.

Haugen’s revelations arguably were the catalyst for bipartisan concern about the harms minors were experiencing online, in what in 2021 was a largely unregulated for-profit environment, with the exception of California’s more stringent laws under the California Consumer Privacy Act (CCPA), which took effect in 2020. Many states have since passed or proposed more restrictive laws protecting minors.

Consider that the age-gate of 13 has meant that, for more than a decade, minors have had their data profiled and processed exactly as if they were adults. In 2024, the degree of harm is now well-documented. In April 2022, the nonprofit organization FairplayForKids.org published Designing for Disorder: Instagram’s Pro-Eating Disorder Bubble.

In December 2022, The Center for Countering Digital Hate published Deadly by Design: TikTok pushes harmful content promoting eating disorders and self-harm into young users’ feeds. Many more reports can be found online and some are referenced in my presentation.

More recently, alarms are being raised because of the global explosion in the sextortion of boys, from minors to young adults. On February 6, 2024, Canada’s RCMP released a statement, “Sextortion — a public safety crisis affecting our youth,” reporting that “According to Cybertip.ca, Canada’s tip line, 91% of sextortion incidents affected boys.” On April 29, 2024, just after I gave this talk, the UK’s National Crime Agency issued an unprecedented warning on the rise in the sextortion of boys:

“All age groups and genders are being targeted, but a large proportion of cases have involved male victims aged between 14-18. Ninety one per cent of victims in UK sextortion cases dealt with by the Internet Watch Foundation in 2023 were male.”

The good news is that for the first time since I started researching and documenting online harms to minors in 2017, there is now a much wider understanding of the degree and nature of harms, and the mechanisms that contribute to those harms.

We had a fantastic Q & A with questions from students, the first being: “Do you think social media has decreased attention spans?”

I flipped the question and asked the student, “Do you think social media has decreased your attention span?” and they answered emphatically, “YES” and we all laughed.

The second question from a student was, “What do you think of the TikTok ban?”

My response was that I was less concerned with TikTok in particular, given the standard for-profit drivers of tech companies. I did point to the differences between TikTok and Douyin, its “sister company” in China, as the latter has significantly stricter content restrictions in “teenager mode.”

In 2018, “Douyin introduced in-app parental controls, banned underage users from appearing in livestreams, and released a “teenager mode” that only shows whitelisted content, much like YouTube Kids. In 2019, Douyin limited users in teenager mode to 40 minutes per day, accessible only between the hours of 6 a.m. and 10 p.m. Then, in 2021, it made the use of teenager mode mandatory for users under 14” (Source: MIT Technology Review 2023).
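As a thought experiment, the access rules quoted above reduce to a few lines of logic. A hedged reconstruction follows; the thresholds come from the quote, while the function itself is mine, not ByteDance’s code:

```python
from datetime import datetime, time

DAILY_LIMIT_MINUTES = 40
WINDOW_OPEN, WINDOW_CLOSE = time(6, 0), time(22, 0)  # 6 a.m. to 10 p.m.

def may_use_app(now: datetime, minutes_used_today: int, age: int) -> bool:
    """Gate access per the quoted teenager-mode rules (mandatory under 14)."""
    if age >= 14:
        return True
    in_window = WINDOW_OPEN <= now.time() < WINDOW_CLOSE
    under_quota = minutes_used_today < DAILY_LIMIT_MINUTES
    return in_window and under_quota

# A 12-year-old at 10:30 p.m. is locked out regardless of remaining quota:
print(may_use_app(datetime(2024, 4, 1, 22, 30), 10, 12))  # False
```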

Rather than focusing on the security risks, I suggested they think about these two very different use models and ask why the Chinese parent company, ByteDance, maintains much more restrictive limits on content for Chinese youth.

We also talked about Jonathan Haidt’s four tips for parents in The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness:

  1. No smartphones before high school.
  2. No social media before 16.
  3. Phone-free schools through collective action.
  4. More independence and agency in the “real” world.

Haidt’s tips are being much debated, and how they might be enacted will depend on parents, communities, and schools as, without question, there are kids who depend on smartphones for specialized learning, as one counter-example.

Jonathan Haidt on Smartphones vs. Smart Kids

What I would add, though, is the need for an intervention in the system of “surveillance capitalism” (Shoshana Zuboff, 2019), and a push for an absolute restriction against the digital profiling or ad-targeting of minors, with anyone under 16 being fully removed from the systems of data capture. Teens, parents, caregivers, and educators will always be playing catch-up to constant updates, changing terms, and new features rolled out by tech companies. As such, again, the burden of responsibility can’t be on the users of these technologies, as the asymmetry of power and resources is too great.

One model Canadians could look to is the EU’s Digital Services Act (DSA), which mandates “Zero tolerance on targeting ads to children and teens and on targeting ads based on sensitive data” (from the European Commission website):

“The DSA bans targeted advertisement to minors on online platforms. [Very Large Online Platforms] VLOPs have taken steps to comply with these prohibitions. For example, Snapchat, Alphabet‘s Google and YouTube, and Meta’s Instagram and Facebook no longer allow advertisers to show targeted ads to underage users. TikTok and YouTube now also set the accounts of users under 16 years old to private by default.”

As Canadian laws are substantially behind those of the EU and the US (proposed and passed), we should push for a rights-based approach that puts children before profits. Given that “Very Large Online Platforms” or VLOPs now have to align with the EU’s zero-tolerance model to safeguard minors, there is no systems-level barrier to the same protections being implemented here in Canada.

This shift will take collective, community action to pressure our governments and legislators for similar safeguards. Right now, we in Canada have a substantially better shot at achieving this goal as American legislators are also pushing for greater transparency, rights, and regulations from Big Tech.

“Children not profits.” It should be that simple.

UNICEF Policy guidance on AI for children 2021

See also my earlier posts, “We street-proof our kids. Why aren’t we data proofing them?” (October 2, 2019).

And, “Can We Trust Alphabet & Sidewalk Toronto with Children’s Data? Past Violations Say No.” (June 6, 2019).

Infinite Eddies: An Elegy

A series of GenAI images of a man and a woman

A short essay on the genesis of Infinite Eddies, a Twine elegy co-created with Midjourney GenAI, written for The Future of Writing 2023 Exhibition, created for The Future of Writing Symposium, The Humanities and Critical Code Studies Lab (May 1, 2023).

Infinite Eddies (2023) explores the affective dimension of Midjourney’s generative AI mediating one source photograph of my father and myself through multiple variations. Prompts recontextualize our mediated image in other places, times, storyworlds, and visual aesthetics, yet across the set the configuration of our bodies and our expressions remains constant.

I see this manifest recurrence as an affective element in the visual evocation of a relationship, expressed in myriad variations and mutations of proximity and arrangement. As I play with prompts to influence the computational temperature of output images, this recurrent, mutable element troubles Benjamin’s formulation of the aura of the original work of art, conjuring instead an aura of affect in social relation and physical configuration, which manifests across an expanding set of images, often simulating photographs, yet themselves lacking the authenticity of the original photograph.

The aggregate effect is of eddies or ripples around the source photo, the unseen pebble dropped in a virtual pool. The spectrum of centripetal elements and patterns and centrifugal outliers opens questions as to the extent of the data set determining the temperature of generated images, and the parameters defining alignment (Christian) in the “magecraft of prompting,” manifest in a spectrum of results from “Maximum Obedience” to “Maximum Surprise” (Kelly).

This is a short first version of a longer essay currently underway.

Brian Christian (2020). The Alignment Problem: Machine Learning and Human Values

Kevin Kelly (2022). Picture Limitless Creativity at Your Fingertips. Wired. Nov. 17.

On JK Rowling and the Value of Listening

Lumos

I didn’t really like JK Rowling’s Harry Potter novels when I first started reading them. My kids consumed them. I read them as I teach children’s lit. Skimmed, more like, as the ‘voice’ and tone of the novels, which intensified through the 5th and 6th, irritated me. I read for the magic and the plot: what happens next? How will this be resolved? What is that mysterious thing?

Along the way I heard the story of JK Rowling’s extraordinary generosity to a Toronto girl dying of leukemia, long before it was public knowledge: Rowling sent an encrypted outline of the remaining novels so that this girl could know how the story continued and ended before she died.

Rereading the novels in order to teach Azkaban, I found Rowling’s choices made more sense: we are immersed in Harry’s frustration and isolation so that we understand the arc of his choices vs. those of Voldemort.

Fan wikis opened up the extraordinary depth of Rowling’s vision and patterning across the series and I remain impressed by her craft and imagination every time I teach her. However, I don’t think I can teach her novels anymore.

Right now, I am flummoxed by Rowling’s failure of imagination. Her hurtful comments on trans people and ‘people who menstruate’ mark the limits of her imagination and understanding. She chose to tweet AGAIN on the life experience of trans people even though similar ‘stake my ground’ tweets in December 2019 were widely criticized. Then, Rowling tweeted support for a researcher whose views on transgender people were condemned by a court as “incompatible with human dignity.”

Rowling’s latest tweets are decidedly more troubling than her prior statements, as she continues to double-down. When we are challenged on our beliefs and views, we have an opportunity to stretch and flex our understanding to see beyond our conditioning, culture, and values.

Rowling’s failure to listen to voices and experiences outside of her own recalls for me insights that can reframe these moments, which can happen to all of us.

A core recognition of value pluralism, according to philosopher Isaiah Berlin, is that values across cultures may be incompatible and incommensurable, such that there may be no equivalence, no “oh, this X in your beliefs is like Y in mine.” When we are called out, as Rowling has been, we have a choice as to whether we will pause and listen to the experience of others. We have a choice as to whether we are open to learning we may be wrong in our views. We have a choice as to whether we will respect that the experience of others may be different from our own, and we can respect that diversity even if we will never understand it.

Rowling immerses us in the experience of a 15-, 16-, 17-year-old boy in a way that becomes believable, with emotional depth, complexity, and truth. Her series also reveals failures of imagination, in her two-dimensional depiction of bullies (Dudley in particular), and in the heteronormative binarism of the series. Fans, thankfully, have enriched her storyworld with fics that imagine what she couldn’t: an adult Dudley struggling with their trans identity and calling on Harry to help with that transformation, which, Harry notes, is something the wizarding world is very good at. There are many more.

The second insight is Ngọc Loan Trần’s model of calling in vs. calling out. In their 2013 blog post, Trần outlines how mistakes by those we love can be opportunities for dialogue and transformation, when those who have injured and those who have been injured share their values and engage with “patience and compassion” and a commitment of “genuine care” for each other.

As a cis-gendered woman of a similar age, I am regularly challenged by my now adult children and by my students on the limits of my understanding. I am grateful for every instance of being called-in. Indeed, that model was originally shared with me by a student.

Why Rowling feels she needs to restate her views on trans people now is perturbing. What her own writing makes clear is that these stories are not hers to tell. Thankfully, fans continue to imagine a much more inclusive and diverse wizarding world. If you miss her world or feel excluded from it, there are new, more inclusive voices and stories to explore on Archive of Our Own and other fan fic sites. And, if you feel inspired, you can write your own fics and let your imagination explore what Rowling seemingly can’t.

“Nuit Blanche and Transformational Publics”

Scotiabank Nuit Blanche City Hall 2009

I stumbled on this feature article on our SSHRC-funded social media creative research project. In 2010, Faisal Anwar and I began our investigation of how people were using Twitter as a wayfinding tool during Toronto’s all-night arts event, Scotiabank Nuit Blanche.

We built a tweet analytics tool, archived tweets tagged with event-specific hashtags (#NuitBlancheTO, #snbTO …), and ran searches based on event and installation names that mapped people flows through the event’s various zones.
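In the spirit of that tool, here is a simplified sketch of the core pipeline: filter an archive of tweets by event hashtags, then tally mentions of each zone’s installations to approximate people flows. The tweet records, zone names, and landmarks are illustrative only, not our original code:

```python
from collections import Counter

EVENT_HASHTAGS = {"#nuitblancheto", "#snbto"}
ZONES = {"Zone A": ["city hall"], "Zone B": ["power plant"], "Zone C": ["queen west"]}

def zone_flows(tweets: list) -> Counter:
    """Count event-hashtagged tweets that mention each zone's landmarks."""
    flows = Counter()
    for tweet in tweets:
        text = tweet["text"].lower()
        if not EVENT_HASHTAGS.intersection(text.split()):
            continue  # skip tweets without an event hashtag
        for zone, landmarks in ZONES.items():
            if any(landmark in text for landmark in landmarks):
                flows[zone] += 1
    return flows

archive = [{"text": "Stunning installation at City Hall #NuitBlancheTO"},
           {"text": "Crowds at Queen West #snbTO"}]
print(zone_flows(archive))  # Counter({'Zone A': 1, 'Zone C': 1})
```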

Over three years, our research expanded to content shared via Flickr, YouTube, and Instagram, revealing a communal psychogeography generated across multiple platforms during the 12-hour event and after.

Presentation on +City / Nuit Blanche and Transformational Publics, 2012.

Working with research assistants, we often found specific moments captured by multiple individuals, offering a proto-Photosynth data set that could be restitched, roughly, into a loose, sometimes 360-degree public documentary.

Presentation on +City / Nuit Blanche and Transformational Publics, 2012.

As I wrote then in an essay published in Public (2012), edited by Jim Drobnick and Jennifer Fisher, “These exchanges make visible the fluid actualization and processual experience of participatory, emergent public(s) that accord with how Michael Warner defines a ‘public’: that it is self-organizing, involves a relation amongst strangers, is simultaneously personal and impersonal in address, is constituted only through attention, and provides a discursive public space.”

In addition, we discovered that striking groups of participants would appear over the night in disparate photos and videos, as they traversed Nuit Blanche installations. One year in particular, it was a group of young people wearing oversized mustaches. Another year it was an indie band in costume, playing through the streets.

Presentation on +City / Nuit Blanche and Transformational Publics, 2012.

What I realized very quickly was the depth and scale of information we had available as to individuals’ movements and activities, and the potential infringement of individual privacy. The question of privacy in the digital public sphere, however, was complicated by, #1, Twitter’s mandate to share widely, and #2, the use of hashtags, which explicitly tag tweets as meant for a wider conversation and viewing by strangers.

+City data visualization tool tracking the #mit8 hashtag during the MiT8: public media, private media conference, MIT, Cambridge MA.

My concerns with data privacy started here. Even with our tool in beta, the data aggregation from Twitter coupled with content analysis on other social media sharing platforms, all public, all accessible, made the outlines of the surveillance state visible.