On The Rapidly Eroding Right to Not Own a Smartphone
Technological Discrimination and Education’s Techno-Fascist Turn
Without our realizing it – or consenting to it – smartphones have in many contexts become mandatory. For example, multi-factor authentication codes and certain apps are mandatory, or near-mandatory, conditions of access to, and participation in, an increasing array of services, institutions, and amenities. This has occurred widely, and without public input or substantive debate. We have been relegated to the status of “users” and clients, and concerns about the implications of these practices for our lives as citizens are dismissed as petty gripes that we need to get over. Millions are, whether reluctantly or enthusiastically, on-board with this “new normal” without having considered its costs or consequences. Others feel and express discontent privately. Meanwhile, major institutions are seemingly deaf to substantive criticisms and objections.
The rapid introduction of these practices, together with the stigmatization of dissent, has encouraged an even hastier, less mindful approach to their adoption. We often take it as a given that we have little or no choice anyway, and so might as well just get on with it. But where does “it” end?
I’ve been a techno-skeptic since at least the early 2000s, when I saw friends develop what I recognized to be an unhealthy relationship with their cell phones, encouraged by manipulative phone companies and their clearly, sometimes criminally, toxic business practices. I was, frankly, horrified. But I was also naive enough to assume that there was still space enough for common sense to eventually break through the encroaching gloom. Over time, I learned to accept that there are different strokes for different folks, that what had seemed to me like a gratuitous fad and distraction had nevertheless become more or less essential to many, and that my usual go-to prescription for their phone-related pressures – you don’t need a cell phone, just use a landline – would likely fall on deaf ears. There were too many reasons for them to believe that the cell phone was the only way – to connect with loved ones, to have employers get ahold of them, to keep up with the latest gossip and trends. To set their minds at ease. There was something about this form of human-technology relationship, however, that continued to strike me as burdensome – more trouble than it was worth. And ultimately a violation of sacred boundaries necessary for a dignified life.
But I sincerely believe that there are different strokes for different folks, and I don’t impose my value judgements on practices that, however aesthetically distasteful I may find them, don’t engender any clearly negative consequences. Because I also sincerely believe that common sense can break through, even when it seems very rare indeed. If I ever had any moral purism about my critical view of these increasingly popular and all-consuming practices, my “opting-out,” it long ago gave way to the realization that many feel they are incapable of opting-out. Any disturbed pity I may have felt has likewise given way to the far greater force of compassion. For people are in a state of enslavement. Sometimes slaves enforce the aims of those enslaving them, and wouldn’t think of doing otherwise.
For several years, I did not have a cell phone and instead used a landline with an “invisible answering machine.” In 2017, I got my first cell phone, a basic flip phone that was pay-as-you-go, which likewise served my purposes for a small fraction of what people were paying on their data-enabled smartphone plans. I also tried out a smartphone for a while, before going back to the flip.
Again, this wasn’t due to any moral or philosophical objection to smartphones per se. But it is important to acknowledge that smartphone is a misnomer – they are actually highly sophisticated computers with phone capability. It is misleading to refer to them by the mundane metonymy “phone,” as nearly everyone does. (“Smart” is also pushing it, but I don’t want to get ahead of myself.) And while they have many wonderful functions, the experience of owning one led me to two negative conclusions: 1) the device is highly distracting, to the extent that it would be disingenuous to insist that one’s distractedness comes down purely to a matter of self-discipline; and 2) I have no desire to use apps – or to be more precise: I have no desire to live in a world in which I might at any point be required to use an app, and so must carry a computer, an app-enabling device, on my person at all times.
This would make me an adjunct to a computer and its programs of governance, to which I am subject. No longer could technology be said to serve humanity; most of us would exist to serve it. And what this really entails, when you look past the dystopian sci-fi connotations, is that this is just another way we would be serving capital, being subject to surveillance, monitoring and marketing ploys, encouraged and perhaps forced to play status games to prove our worth to others, serving the centralization of wealth and power – processes which alienate us from our status as citizens with rights while imposing upon us the narrow status of worker-consumer. Disenfranchisement and enslavement, in other words.
Each of these points raises important ethical questions about smartphone use in general, and certainly about policies and practices which make apps mandatory conditions of access to, and participation in, services. (While it may be futile to attempt to influence adults’ relationships with their devices, there is still a chance that a wider recognition of the full spectrum of human-device relationships can be achieved, increasing awareness of the various costs and benefits thereof, thus curbing policies and practices incentivizing a particular relationship.)
The disenfranchisement to which I refer is especially noticeable with some of our elderly fellow-citizens, who find it difficult, frustrating and alienating to navigate (or try to navigate) the labyrinthine requirements of accessing what for decades – for generations – were relatively straightforward services, delivered with competence, consistency and respect. My aunt, who is in her 80s, has been using a landline exclusively for her entire life. Last year, her phone company informed her that her line would be removed if she did not accept certain “upgrades.” Since she didn’t want these upgrades, she braced for disconnection and began to look into getting a cell phone. Unfortunately, she didn’t ask me for advice about her purchase, and before I could recommend the basic flip – which, by her own stated criteria, would have served her needs perfectly well, those needs being not for a computer but for a phone – other family members, with the sincerest of intentions, had her try out a couple of different smartphones.
Hearing her relate her experiences with these devices was both concerning and reassuring. Concerning, because she recounted real alienation and despair – disconnection, from something which had for the majority of her long life been a stable and reliable source of connection with the world. Reassuring, because she properly identified this, and instead of feeling like she was the problem, felt a dawning sense of indignation about it. One of the things I love about her is that she is as articulate as she is independent, while having retained a humane sensitivity despite encountering cruelty and abuse at various points in her life. At the same time, she is often reticent about making a public fuss about anything and drawing attention to herself. This is due partly to cynicism, but mostly to a pragmatic and rational determination that her articulateness may not be enough to sway others, and her sincere efforts could be in vain. It occurred to me that this must be the experience of many seniors, and that the general public of our brave new world has simply stopped listening closely to them.
Her experience also meant that she could no longer see the changes that had swept most of her fellow citizens’ lives over the previous generation as something to which she was immune. Or indeed, as a product of the considered choices of independent individuals. She too identified a manipulative aspect of the smartphone, likening it to a boss, demanding her attention at any time, usurping her thoughts.
Having long considered this frustrating circumstance, with a view to solving it, I must confess that I believe much of it to be by design. The powerful have immense resources to spend on normalizing certain practices, marketing them with sunny optimism, contrived technocratic urgency, rationalistic paternalism, and silencing (or drowning-out) objections when they arise with stonewalling and gaslighting. How much of the “static” and petty division in today’s world is traceable to this? I’ve often grappled with the complex cultural and ethical ambiguities of our technocratic age, particularly regarding wokeism and covidism, and there is a similar dynamic at play here.
What these things seem to have in common is stigma – the abusive and arbitrary invocation of risk, which good people must co-operate in mitigating, by adopting prescribed safety and security measures, and for which any act of questioning marks one as a bad person, worthy of exclusion. Sometimes this amounts to merely obnoxious “political correctness”; at other times it takes forms that are distinctly abusive – extremist, illiberal. What is unique about our era, however, is the degree to which these circumstances and their outcomes are subtly framed by, and interpreted through, digital channels. This has created an impressive assortment of robust solipsisms reinforced by both algorithm and association. One might have assumed that these are ultimately responsive to reality – that the media through which we obtain so much of our information and perspective on the world is really human at its core – but that is, alas, only an assumption. And there is plenty of evidence to suggest that, while billed as capable of breaking down barriers and connecting people in a world where “anything is possible,” these technologies have a dehumanizing and alienating effect as well, one that remains under-acknowledged.
Of course, there has been some recognition of these problems in some quarters; but the solutions proposed seem to make them worse. For example, rooting-out and censoring “misinformation,” which is often invoked in the most foreboding terms. The reaction may seem reasonable in some instances. During the recent pandemic, many well-meaning people thought that popular platforms and arenas of expression and discourse needed to be screened to filter-out discourse critical of the effort to “flatten the curve” by locking-down, masking, avoiding human contact, and even avoiding leaving one’s home – and later, to screen and filter-out discourse critical of the vaccines. Given that such discourse included a lot of, shall we say, inconvenient truths, the fact that this effort identified itself as “science-based” / “following the science” marked it as especially absurd. Millions knew it, but millions more were terrorized by the propaganda. And given that such discourse included many reasonable criticisms and objections on the basis of conscience, particularly regarding risk-benefit analyses, which should never be deemed self-evident in a secular and open democratic society, and should always be open to lively debate, the moral and intellectual bankruptcy of “zero covid” and mass vaccination is manifest to all who bother to look at the situation honestly and impartially. Science, instead of being a source of insight and an antidote to fear, was enlisted in the service of fear.
Our democracies failed spectacularly in that period, usurped by technocracy, and a wider acknowledgement of the truth of what occurred may still be years away. But it is crucial to understand that it could not have happened without extensive emotional manipulation accompanying the initial misinformation campaign, which effectively sowed the seeds of the hysteria that was later stoked. A fire was lit that took years to put out. Arguably, many of the draconian policies were themselves instituted in order to appease a panicked public. And of course, a significant portion of the public embraced a new form of privileged and cozy domesticity, tweeting “Keep calm and get vaccinated” while having their groceries and consumer goods delivered to their door. A new way of life was also being marketed to us. Any doubts they may have had about the unprecedentedly harsh and invasive impacts of the public health measures themselves were silenced by the death and hospitalization numbers on the nightly news (for many, the all-day news), a similarly unprecedented abuse of statistics and diagnostic protocols, calculated to terrorize.
But in many ways, we were witnessing the last gasp of legacy media. The writing was probably on the wall for years before that, with the mass migration to social media, but the pandemic (or rather, the way the pandemic was framed and interpreted) accelerated its demise, as it accelerated the saturation of our lives with all things digital. The establishment went into desperation mode, and we now daily witness the peculiar sight of illiberal liberalism, with institutions and parties throughout the advanced democracies consistently endorsing the shift to more technocracy, even as fundamental rights and equality are eroded. No wonder people fall for “rebellious” individualist-populists like Trump – but if you’d rather ignore the problem, just say Ewww and go on pretending your side is flattening the curve.
“Far-right misinformation” is invoked as a way of encouraging an infantilized public to accept only establishment-validated sources and perspectives. Accordingly, many – who, it pains me to say, identify with historically “left” causes – gratefully accept this cushioned bubble of ignorance. It gives them a semblance of hope – a progress narrative – in a dark world.
But here’s the thing – and I suppose I should preface all my criticisms of our over-technologized world with this: We aren’t going back. A free world is one in which we have the right and responsibility to discover and share the truth for ourselves. That’s where real hope lies. Sure it would be easier to have The Man hold your hand for you, but there is no integrity in that. Just a spiral of delusion and denial. This is the real challenge presented to us by the thing called social media. (Which basically just means online platforms that are participatory in nature, but whose participatory nature and appearance of reflecting public sentiment is in tension with their centralized ownership and attendant moderation practices and economic incentives.) If we view it as one of many research tools, it can be empowering. If we view it as a “cesspool” of toxic and non-credible discourse, then we’re in effect saying that we insist others do our research on the important issues of the day for us.
After all, we who identify with secular scientific-materialist humanism – the modern left-liberal sensibility, broadly speaking – have been waiting a long time for the legions of greedy, backward, religious idiots to catch up so we can get on with this “saving the world” thing. But I guess it’ll be more Netflix and delivery while we wait for them.
Nothing could be, or keep us, further from the truth. And this “epistemological distancing” is a corollary not of any particular source (we should not be prejudiced against “establishment” voices either), but of an inability to reflect. To unplug. We really are all part of the solution, and the world we want. It is only in a context of thoroughly saturated and seduced attention that so many important matters remain overlooked, and hypocrisy and denial remain so intransigent.
It isn’t that the world is divided into evil conspirators, the mindless sheep carrying out their plans, and the Good Guys – the Resistance. It’s simply that those who have a vested interest in stifling debate, and the resources to do so, often direct those resources toward that end. Call me cynical. But I also believe that we the people must direct our own resources of reason, speech, and love toward a different outcome: a truly free society. A society that values truth and human dignity above money. Even as many feel financial pressures that may be unfamiliar, the importance of this point is all the greater.
The case to be considered here pertains to the context of an educational institution. Such institutions have long been thought of as exemplifying truth and dignity and empowerment. And that is why I think it worthwhile to expound at length on the injustice of a “smartphone mandate” at the school I attended recently as a mature student, George Brown College in Toronto. To relate some of my recent harrowing experiences, to share insights and arguments that others may find useful, and to express solidarity with all who search for hope in this real-life dystopia.
About a year ago, I found myself in the peculiar position of being unable to finish my 2-year college business program due to an unforeseen accessibility issue. The issue arose about half-way through my program, in mid-2023, and although I had managed to navigate the program in spite of it for a couple of semesters, I found myself at a loss. I was set to fail a course due to my inability to access a good portion of its content.
The college had instituted a new multi-factor authentication (MFA) policy that required the use of a smartphone app to access several important educational resources. By choice, I did not (and do not) own a smartphone. The stated purpose of the MFA system was related to cyber-security. But as soon as I read the first email saying “Don’t get locked out! Download the MFA app today,” I knew it was fundamentally misguided and immoral – and itself dangerous – presenting a significant barrier to students such as myself who didn’t own a smartphone, and violating the right of all students to not own a smartphone. But I wouldn’t say I was surprised that they (whoever “they” really are) were trying to institute it, or that the vast majority of those I was to speak to about it had little to offer besides a resigned shrug. We’re all kind of inured to these constant “updates” and stipulations from unaccountable IT bureaucrats, and used to jumping through their hoops. But I was deeply disappointed, and knew the policy must be resisted.
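It is worth pausing on a technical point here. The time-based one-time passwords that most authenticator apps display follow an open standard, RFC 6238, and can be generated on any computing device with a roughly accurate clock – a desktop, a hardware token, a basic script. The smartphone-app requirement is a deployment choice by the institution, not a technical necessity of MFA itself. A minimal sketch in Python, using only the standard library (the secret shown is the published RFC 6238 test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """Generate an RFC 6238 time-based one-time password from a base32 secret."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of elapsed time steps, as a big-endian 64-bit int.
    counter = struct.pack(">Q", timestamp // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the low nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Published RFC 6238 test vector: key "12345678901234567890", at 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Any device that can run a few lines like these – and keep its clock in sync – can serve as an MFA token, which is precisely why a smartphone-only implementation is a policy decision rather than a security requirement.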
I don’t mean to imply that there is a coherent conspiracy of “they” out there. But it’s not like we asked for more hoops to jump through. Someone is putting them there. Their willingness to do so is directly tied to our willingness to comply – but we are still being misled.
The fact that I was a mature student likely had something to do with my unwillingness to comply – I remembered a different time, and doubted the rationale of cyber-security. Moreover, I still had, and most days still have, enough reverence for education to believe that it can and must be accessible to all. I wouldn’t say I idealize education, but I wonder how anyone can doubt that our expectations of it have fallen abysmally low of late – that it has become transactional and in some cases outright predatory, debasing and insulting the intelligence of students and the public, deceiving us while taking even more of our money as the economy continues to crumble and inequality worsens. (I’ll have more to say about this in regard to A.I. in academic contexts later.)
Notably, some schools have moved to ban smartphones in class, based not only on teachers’ (and students’) experience of them as a disruptive and distracting influence, but also on actual research showing this, and on the expressed concerns of a growing number of parents. However, to my current knowledge these are high schools. Colleges and universities have taken the MFA approach – what I refer to as “smartphone mandates.”
We can speculate that such institutions might regard their students as more mature and disciplined with their smartphone usage. Or maybe it’s that their business models require a greater emphasis on “cyber-security.” What that really entails will have to be unpacked. Suffice it to say, once we critically evaluate what we are giving up to achieve this ostensible security, the costs are clearly not worth it. Moreover, it becomes evident that the spectre of threats to infrastructure and data, personal and institutional, has been invoked to achieve that crucial first step of compliance. After which “this is just how we do things around here,” and no one feels they can voice any valid objection to it.
Yet there are valid objections. And it’s time they were heard.
The purpose of this piece is not only to articulate them, but also to illustrate that the institutions to which I refer are militantly opposed to having them be heard and debated publicly. To say they shun the spotlight would be putting it mildly. But it’s not because they’re just a bunch of mild-mannered bureaucrats. Many of them are, of course – diligent functionaries selected for their ability to carry out often highly complex directives in a spirit of maximum agreeability – but many are technocrats, attempting to remake truth itself in their image – to manufacture consent – with dire implications for freedom and democracy globally.
(In this regard, it’s worth noting how much the world seems to have shrunk of late. But far from being merely a pleasant product of greater connectivity, it is also a dangerous illusion leading to a breakdown in freedom, democracy, and autonomy – maybe even reason itself and its chief export, truth.)
I asked the college on a few occasions during my program if I, an enrolled student, was going to be denied access to said educational resources because I own a flip phone and not a smartphone. I received no response.
At first, I gave the college the benefit of the doubt about its stated commitment to cyber-security. Approaching the matter with a conciliatory attitude, I repeated my favorite mantra: this is all a big misunderstanding. I’m as cynical as anyone can be about the digital age – that isn’t new, it’s a perspective built up over the last couple of decades – but I’m not cynical about education. That’s why I was there, and where I was determined to keep my focus. In fact, I had graduated from a different 2-year program at the same college in 2020, at the start of the pandemic, when we were granted the mixed blessing of an impromptu shift to online classes. It wouldn’t be long until the college got on the bandwagon of online delivery, extending it well beyond the point at which any rationale of virus avoidance remained, such that it’s now one more thing we can file under the heading “the new normal.” (Higher cost, inferior service.) But I made sure to prioritize my objections, and humbly ask whether the changes I was witnessing lived up to the college’s own stated values – and to secular, small-l liberal education in Canada as I have known it.
As mentioned, I was an older student. It might be argued that my values – in this case, exercising my right to not own a smartphone – were and are out of step with the times, and with the direction the college was taking. Yet I had always seen GBC as a model of inclusion, diversity, and equity (in the best, non-co-opted senses of those words). For example, it hosts a large population of international students who pay high tuition for the opportunity to study there. They make enormous sacrifices to obtain Canadian experience and connections, and some are on a path to long-term residency or citizenship. For many, this is their first time living in Canada – a country that prides itself on being an exceptionally free, open and democratic society. It is also an exceptionally wealthy country – a “land of opportunity” – and one which celebrates meritocracy. But meritocracy does not mean compliance with arbitrary and senseless rules, however much hard work accompanies that compliance. We should all be concerned about the narrowness of a social and institutional context in which the right to not own a smartphone is ignored with blithe technocratic arrogance. The problem with policies of mandatory smartphone ownership (in this case, policies that make access to basic education conditional on smartphone ownership), and with the norms that enable them, is that they reinforce habits of compliance while undermining meritocracy (in a manner that is becoming all too familiar these days), even conflating such “due diligence” with actual competence and actual integrity. Yet one could hardly find a better example of gatekeeping and, in a word, corruption. A misrepresentation – a betrayal – of the public interest of this great country, predicated on the exploitation of trusting and hopeful newcomers.
An abuse of power by a regime declaring itself supreme – the embodiment of human resilience and ingenuity, of “progress” and “innovation.”
I don’t have much technical knowledge about cyber-security threats to personal and institutional data, but I am not flippant about such concerns. I simply recognized that the MFA policy amounted to a smartphone mandate, violating the right of all students to not own a smartphone. Virtually all students owned a smartphone and likely wouldn’t find the installation of the MFA app to be anything more than a minor inconvenience; yet there appeared to be no room for me. Many I spoke with understood that my reasons were sound, and I recall speaking to one young woman whose objections to the policy went beyond a mere quibble with its inconvenience, and evinced principled indignation. Regrettably, I didn’t get her contact information with a view to bringing a more substantive case – perhaps a class-action lawsuit – against the college.
Putting aside the more sinister, “techno-slavery” implications of such practices, it isn’t hard to see that the main motivation behind them has to do with the mundanely rational matter of liability. The scale and sensitivity of the data being managed, and the chosen methods of managing it, seemingly necessitate these layers of security. But the public didn’t choose these methods, and we know that said practices were unheard-of only a few years ago. So, institutions can get away with discriminating on these bases because the public allows it. But it’s a stretch to say that the public really does allow it, in the sense of giving informed consent. So the public’s trust is being abused.
This is similar to the issue of vaccine mandates during the recent pandemic, which were introduced and maintained for a time despite it being known that the vaccine didn’t reliably curb transmission. Clearly unscientific and discriminatory; but if someone got sick on an organization’s premises, the organization could always say it was “taking every precaution to ensure the safety of its workers and the public.” And that it was doing so in accordance with expert recommendations.
It is almost unbelievable that institutions of higher learning would betray the public in this way. But all who value truth must look at the evidence and acknowledge the corruption. Then, if they judge it to be a worthwhile use of their precious energy, advocate for change – or the dismantling of such institutions.
Over the past couple of decades, we have generally come to accept that certain programs or disciplines require the purchase of specialized technology, and even that all students at a certain level must access a computer and wi-fi to participate in various aspects of learning (e.g. typing essays). To many, the assumption that everyone already owns a smartphone might not seem very different, if they even thought to make a conscious comparison. This has become normal, and newer technologies like QR codes also appear to be approaching ubiquity. Most infamously – lest we forget recent history – so-called “vaccine passports” were employed as discriminatory and stigmatizing means of excluding the unvaccinated from previously public places. Ostensibly, this was to prevent transmission of the virus. It is now widely acknowledged to have been simply a compliance exercise using deceptive and discrediting methods, violating the principle of informed consent.
The fact of ubiquitous smartphone ownership, and the increasingly “digitized” form that so much of public life has come to take as a result, have important and under-acknowledged implications for notions of citizenship, equity, accessibility, community, and mental health. Therefore, respected educational institutions’ adoption of such a measure as the MFA says something about their vision of our society, what it can be, and what it should be. And while I take a quite different perspective, I think it is most telling that no substantive debate has been had about this.
Since I am calling for debate, I have tried to give the college the benefit of the doubt about this matter, to “steel-man” its case as much as possible. There seems to be little doubt that the dominant priorities in this case are at least as much about economy as they are about security. I understand that the college considers its role to be one of aligning the interests, practices and skills of students – the incipient workforce – with the interests and practices of society in a broad sense, and with the skill-requirements of various industries specifically. Indeed, over six years I entered no fewer than four programs at GBC – one a free upgrading program which I made grateful use of, one I dropped after one semester, one I earned a diploma in, and one I was prevented from finishing, under protest, due to a misalignment of values between myself and the college.
I suspect that the worldview behind the MFA policy holds it to be the case that not owning a smartphone negatively impacts one’s economic priorities and potential. I believe that policymakers’ hearts are generally in the right place – that most of the time, they sincerely believe their statements about inclusivity, accessibility, and equity, and that it is not necessarily inconsistent with such values to deem foregoing smartphone ownership to be a case of willfully putting oneself on the wrong side of the “digital divide” – and at a social and economic disadvantage. The modern-day equivalent of a vow of poverty, a kind of willful disabling of one’s economic opportunities and developmental potential.
However, any such assumption would evince a failure on the part of the college to properly appreciate the ways in which smartphone ownership can negatively impact one’s health, well-being, opportunities, and potential. Or, to put it another way, it indicates a tendency to overlook valid reasons for choosing to forego smartphone ownership, and unduly incentivize an option that some – a growing number, in fact – consider undesirable.
Moreover, the case can just as easily be made that the policy is a hasty concession to the vicissitudes of the present economy, which have seen many face desperate circumstances (myself included) – a reaction conceived in incentivized despair, rather than in pragmatic, conscientious, and forward-thinking hope, and that it constitutes willful disablement of our very society – an exchange of freedom and dignity for the promise of financial opportunity (Big Tech contracts, etc.) and economic competitiveness in the “global economy.”
To me, it is self-evident that the policy reflects an agenda that has not been subject to sufficient scrutiny. But that could simply be because, having long eschewed smartphone ownership, and maintaining a sense of wariness about the invasiveness of such devices, I find the costs of such a policy especially clear. To those who own a smartphone, and are consequently apt to regard such demands as mere inconveniences, the rationale of “security” is for the time being unproblematic. After all, who in their right mind would pass up enhanced security? Or shrug off concerns the Smart People have declared to be of great import? Who are we to gainsay them? Considered in the abstract, it makes perfect sense. Even if they don’t give it a moment’s thought, surely they can understand why an institution would go to great lengths to protect its sensitive data. Even if this ends up violating sensitive rights.
As a member of the general public and ex-student, I have next to no influence over college policy. Such institutions are not democracies. This is a question of leadership, and although I am not confident in my understanding of its goals and interests, I have no trouble asserting that the discriminatory policy under discussion is at odds with the public interest. We might hope that, eventually, “the market” will decide that institutions with smartphone mandates are not worthy of their business; but that might beg the question here. For my point is that financial considerations are clearly a major factor in the establishment of such discriminatory practices in the first place, corporate influence has led to misrepresentation of the public interest, and this rationale, namely cyber-security, has precluded a proper discussion of what is in the public interest, namely the question of whether we have the right to not own a smartphone.
Having been alive a while longer than most of the college’s students, most of whom probably came of age in a world where smartphones (and social media, and a lot of other insidious digital forms) were already somewhat ubiquitous, I feel a duty to raise these concerns. But whatever the implications of such technologies – and we must recognize that they are not all good or bad, and that there is a diverse spectrum of individual preference – a key concern is not being adequately addressed. Thus far, we have merely assumed that accountability is being upheld. And that means it most certainly isn’t. There has not been adequate discussion about the costs and benefits of the increasingly wide implementation of such technologies in public life, as a condition of participation and access to services.
I am fully aware that the grounds on which my complaint rests are unfamiliar. My arguments are about alienation – not only alienation from our democratic agency, but from ourselves and each other. Technology-based discrimination is not recognized in the Charter of Rights and Freedoms, the Ontario Human Rights Code, or the human rights policy of the college I attended – or any college of which I am aware, and it is widely practiced by such institutions. The assumption is pervasive that the adoption of technology, especially if it is by a large number, constitutes good evidence of its practical utility and, at a great enough level of complexity – its indispensability. It is thought to be enabling – facilitating access – rather than presenting new obstacles to access. It serves us, but also has its own ends that we are obligated to serve. People think that any rejection of it is regressive – moving us backwards. Little do we notice how we have become trained, and how much of our attention has become fixated on the superficial. That is, on our interfaces.
Moreover, the characterization, or narrative, of benign practical utility is facilitated in no small part by a process of promotion, much different today than it was in decades and centuries past. Today, “upgrades” are cast as good and necessary and of indisputable value, and we all know our place when it comes to discussing them. We didn’t ask for them, and we certainly don’t know how to refuse them. And this amounts to a unique and insidious type of coercion. The reason it isn’t recognized as such, besides the shrewdness of the aforementioned promotional campaign, is that although much of the public has (consciously or unconsciously) pursued incentives, a corresponding assessment of the costs has not been undertaken. In this case, we are talking about costs in terms of access and autonomy.
While many have come to regard education and technology as inextricably linked, this is purely marketing on the part of the technocratic establishment – the so-called “professional managerial class” – and there has been scant attention paid to how they should remain distinct, particularly given the implications for human rights and accessibility. (That is, for liberty.) It certainly seems counterintuitive that education in the information age and in a “knowledge economy” would become more rigidly ideological, less tolerant of cultural and individual diversity, and place more arbitrary demands on students. But that is the trend. We must not allow the lip service paid to equity, diversity, and empowerment to obscure that simple and tragic fact about our modern world. We must acknowledge the horrendous betrayal of the public – all of us potential students, all of us interested in truth and human dignity and development – that this constitutes. We are witnessing unconscionable corruption at a scale that is historically unprecedented. Addressing it must be our top priority.
This is not the first time that security has been invoked as a means of preserving allegiance to certain exclusionary practices, and indeed privileges. The ideal of a free and open society entails that a minimum number of conditions are placed upon the full participation of citizens in the making of contributions and the enjoyment of opportunities.
I and others voicing these concerns are of sound mind, and do not want to be granted any exemptions or accommodations. We want such policies and norms to be dissolved altogether. We should not be thought of as users refusing a self-evidently valuable opportunity or griping about conditions that our more rational peers quietly accept, but rather as citizens embracing other, no less valuable opportunities. A corollary of which is that the opportunities granted by a truly publicly-oriented institution ought to remain compatible with a broad range of lifestyles, particularly inasmuch as these reflect fundamental human rights, dignity, and healthy human development. And that we therefore see the tendency to adopt such conditions without due examination and reflection as the central problem.
The vision of a free and inclusive public sphere has suffered greatly over the last few years. In a variety of ways, society has become more insular, divided, and technocratic, as we have seen – or as the case may be, not seen – the emergence of new forms of marginalization and stigmatization. Despite the numerous ill effects of such policies and practices, the public has been regularly admonished that it has all been done in the name of safety and security – not to mention progress and prosperity. And that, almost as a direct corollary of this, no very great errors have been made.
There is no question that I have found this negligence a motivating factor in my own efforts to highlight errors where I believe they have occurred or are occurring. I do so in the full awareness that 1) I am easily dismissed as an individual who is making their judgements based on a combination of misinformation and subsequently misinterpreted isolated experience, hastily extrapolated to the wider world; and 2) most of those I engage with about these concerns have already accepted the framing of what I am calling “errors” as being unfortunate concessions made in a tumultuous and sometimes dangerous time. This illustrates the despair inherent in fear. It seems we may never learn. Fear has brought us very far from the free and open society we all deserve – and all say we want.
Many of my classmates in college used WhatsApp before 2020, and rather more so in subsequent years. In my later program (2023-2024), I was surprised to see class instructors posting what they referred to as “the class WhatsApp link.” Stating my preference not to use WhatsApp (a result of not using any smartphone apps) was met with the admonition from some fellow students that “We use WhatsApp (to discuss course material, plan assignments, etc.).” The fact that those in positions of authority condone this – what amounts to technological discrimination, albeit in less explicit terms than the MFA policy – has no less important implications. I would argue that this unfairly excludes those expressing a reasonable preference based on reasonable expectations.
Additionally, I know of at least two courses that openly advised using generative A.I. such as ChatGPT in the writing – perhaps “construction” would be a better word – of assignments. One can draw their own conclusions about the consequences of this practice, not only for student learning, but clear student evaluation. (These two are obviously very closely linked, if one believes that formal educational institutions are not themselves in the business of producing artificial intelligence.)
Inasmuch as the new habit of deferring to technological intrusions into the classroom is accepted (along with a high tolerance for technological breakdown and the amount of time it eats up), and even presumed a net-beneficial and voluntary adaptation on the part of students themselves, one might think this indicative of a kind of “empowerment.” But what has really occurred, as in the case of smartphones in high school classrooms, is that teachers have been disempowered. Their role – whether they know it or not – is to teach. And they can’t do that when they can have no expectation of students’ attention and students can have no expectation of being fairly evaluated.
As readers can imagine, I did not navigate all this with quiet stoicism; I occasionally raised my concerns with college administrators and others. But no one responded to my emails, and the individuals I spoke with in person expressed the kind of empathy we’re all used to exchanging about the daily aggravations of technology. It was appreciated, of course; but they shared neither my sense of urgency (if not outrage) about the matter, nor any suggestions for finding a satisfactory resolution.
Which seems strange. If my concerns were warranted, and the obstacle aggravating, surely finding a solution ought to have been a priority. Even a convincing reassurance that something could be done would have been a ray of sunshine. But perfunctory indifference reigned, and no one even had the chutzpah to say Suck it up, buttercup. The message was instead I can’t be bothered with it right now, let’s move on. This breezy dismissiveness was very much at odds with the substance of the problem – leading me to think of it as a kind of willful blindness, a condition that persists because we are being encouraged to adopt it.
I noted the accessibility barriers immediately, and took steps to bring them to the attention of teachers and administrators (as well as students, though I wasn’t hopeful about changing their minds or organizing resistance to the policy at the time). Eventually I couldn’t use the college’s email service, and had to use my personal email. I was also unable to undertake some group projects with my assigned groups when group members insisted “We use WhatsApp” (the “We are Borg” was implied), and so completed and submitted them entirely individually. This didn’t go over well, because I had not asked the professor for permission to take an alternative course of action. (No brownie points for taking initiative.) The professor was mostly just put-out by having more to mark, and while I hoped she understood my justification relating to a deeper issue of discrimination, she said
Whatsapp is used by the students to communicate as a class. This trend began when all classes moved online during the pandemic. This is not sanctioned technology from GBC as far as I know.
But I’m not anti-WhatsApp, and never have been. Nor do I think any learning institution should be. Of course students should be free to use WhatsApp and similar apps. The technology was not “sanctioned from” the college, but it was also not mandatory. (Then or, to the best of my knowledge, now.) I was not asking for a WhatsApp ban, I was simply proceeding on the correct assumption that it was not mandatory.
She also stated that there were “no smart phone requirements” in her course. But while there weren’t any such explicit requirements, and while I was indeed able to access course material for that course in spite of the MFA policy, this didn’t hold true for all courses in the program. And apparently, without clarification of what “no smart phone requirements” means in the context of group assignments, there would exist a de facto requirement.
Later, after sitting down with someone from the school’s human rights / equity office, and having a discussion that at first seemed productive – any discussion in which you can articulate your concerns fully is a positive thing – but ultimately went nowhere, I had to accept that my program couldn’t be completed. Additionally, the college refused to strike the F grade from the previous semester from my record, despite it being clearly due to encountering barriers to learning created by the MFA policy.
In a written judgement, they said that “discrimination based on inaccessibility to technology is not listed as a protected ground in the Human Rights Code of Ontario.” The phrase “inaccessibility to technology” strikes me as somewhat disingenuous. Smartphones are not a technology inaccessible to me. I was simply exercising my right to not own one – a right GBC infringed by instituting the MFA policy.
Now personally, I don’t see why this should be so hard. Consulting the Human Rights Code is unnecessary. Just remove the damn policy. It wouldn’t hurt to also acknowledge the mistake. It isn’t the technology that’s inaccessible to me, it’s the education. Moreover, making access to said education conditional on the use of said technology incentivizes that technology, in a manner inconsistent with reasonable and well-articulated concerns about its adverse impacts, both on students’ well-being and on learning itself. This seems like an awful lot of trouble to go to, just to ensure compliance with something that has no discernible value to students.
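To make the technical point concrete: the six-digit codes behind most authenticator apps come from TOTP (RFC 6238), an open standard that runs on any device with a clock – a desktop, a cheap hardware token, even a short script. (I am assuming here that GBC’s MFA uses standard TOTP, as most such systems do; I don’t know the specifics of their implementation.) A minimal sketch, using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, now=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code - the same algorithm most
    authenticator apps run, requiring no smartphone at all."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count of elapsed time steps (default: 30-second windows)
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# RFC 6238 test vector: this secret at T=59 yields "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

The point isn’t that everyone should run a script; it’s that nothing in the algorithm requires a smartphone, which makes a smartphone-only MFA rollout a policy choice, not a technical necessity.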
Part of how corrupt institutions and those administering them function is by taking conciliatory steps – some of which are sincere, and some of which are calculated attempts to “manage” complainants, objectors, and would-be whistleblowers, and get rid of them with minimal fuss. That is, when ignoring them doesn’t work.
Remarkably, a couple of administrators would later bring up the possibility of an exemption from the MFA policy. But this option had never been clearly stated in communications from the college. The language always demanded compliance (e.g. “Don’t get locked out!”), and the disregard for the right of all students to not own a smartphone was always manifest. (As I mentioned, not only with regard to this specific policy but in aspects of the culture of the college and its classrooms.)
They were giving me the run-around. Pretending to be oblivious. Trying to gaslight me. Trying to exhaust me. Pretty standard bureaucratic practice. And it was never lost on me that most of the public are themselves oblivious, beaten-down, and hopeless about the possibility of positive change. Compliance regimes have created an atmosphere of anxiety and apathy. They have abysmally low expectations of autonomy, agency and access. And, with the younger generation especially, no memory of anything different.
While it is undoubtedly true that times have changed and new security challenges have arisen in recent years, some of us are old enough to remember when a policy such as the MFA would be unthinkable, and when this whole discussion would seem absurd. What memo did I not receive?
The way “times have changed” lately amounts to a further shift to online, remote or screen-based delivery of services: numerous new verification codes, with security as the all-purpose justification, and QR codes, citing convenience and accessibility – virtually all of which has occurred without transparent examination of the ramifications for human rights and autonomy, or human dignity. This gentrification of the public sphere is defined by false and shallow lip service to inclusivity, access and techno-optimist empowerment. And of course, safety. So things have not changed for the better. And the only way to overcome this profitable absence of transparency – I believe the word is corruption – is to start showing real leadership.
I don’t mean to over-generalize here. Of course the world has changed, and I’m not advocating the full-scale dismantling of what I have pejoratively described as techno-fascist gentrification. Viewpoints on this matter are subjective and nuanced, and may differ from person to person. I recognize that I am an outlier. My request from the beginning has been straightforward: that the college acknowledge students’ right to not own a smartphone. If legal bodies such as the Ontario Human Rights Tribunal, or provincial or national governments acknowledge and enshrine this right, even better.
To return to the facts of my particular case.
1) A discriminatory policy was implemented in the middle of my program.
2) I repeatedly raised concerns about the unfair consequences and implications of the policy.
3) My attempts to reach out were ignored.
4) In early 2024, upon realizing that, due to significant academic, financial and psychological pressures, I was incapable of finishing my program, and beginning the process of withdrawing, a couple of administrators revealed to me that there are (and allegedly, were) avenues for me to proceed without experiencing the barriers to access that I did. In actual fact, these were not at any time made evident to me or the student population in general.
At the very least, the failure to inform me of the MFA exemption option would constitute negligence on the part of GBC. But regardless, upon hearing of this option I rejected it, on the grounds that it appeared to be an accommodation that did not sufficiently ameliorate the negative impacts of the policy, and in fact might normalize them. In other words, my objection is on the basis of conscience. But there was another reason: the exemption option was never explicitly stated before then. It seemed therefore to be, not a genuine exemption to the “smartphone mandate,” but a means of appeasing objectors (and silencing objections), while ensuring that dissent remained something that would seem prohibitively onerous to the majority, thus obtaining mass compliance. A high degree of compliance is the “point” of the policy – genuine, explicit exemptions would eventually nullify it, as the immoral aspects of the policy became apparent to more people. Then the college would take a second look at its revenue projections, and rescind the policy. So, whether this negligence was willful or not can be debated. But I wasn’t born yesterday.