Ethical Shenanigans, Mental Health, and Machine Learning

Or: it’s not just your political affiliation that’s being monitored, and it’s not just rival political factions doing it.

London Graves
4 min read · Feb 18, 2021

I’ve been diagnosed with bipolar disorder. I received that diagnosis when I was around 18 years old, partly because my mom had been diagnosed with the same disorder. The diagnostic criteria were met, but as a lot of folks with significant time spent around mental healthcare will tell you, overlapping diagnoses happen quite a bit. Treatment can be a moving target, even when there is not a dual diagnosis.

I’m 31 now. I’ve learned a lot about my brain and how to predict some of its shifts, the ebbing and flowing of things, and I think I’ve got a better handle on it than I did ten years ago. Little did I realize, other things in this world have gotten better at tracking that kind of thing, too.

There’s a fabulous TED talk by Zeynep Tufekci called “We’re building a dystopia just to make people click on ads.” In it, she discusses the phenomenon of radicalization through the lens of algorithms designed to make a person stay on a website and continue to click through to the next item, and the next, and the next…

“It’s like you’ll never be hardcore enough for YouTube,” Tufekci says, after recounting a seemingly benign experience:

I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan.

As a vegan myself, I’m torn. I don’t feel that veganism is a “hardcore” idea, but I do understand where she’s coming from. These are the same kinds of algorithmic processes that can lead one down the rabbit hole of hate-filled political groups, uncovering and exploiting any latent tendency toward white supremacy, antisemitism, or fascism.

But that’s not all. Tufekci discusses an experiment designed to answer the question of whether machine learning can be made to predict when a person with bipolar disorder will slide into a manic episode. Not surprisingly, the answer was yes.

The uninitiated may not understand why this is an issue. Let me explain.

Depression is characterized by low motivation, a general lack of interest in things, even those you would normally enjoy, and often low energy and lethargy. Bipolar disorder includes both a manic and a depressive component. Type I bipolar disorder involves true mania; rapid cycling, a specifier that can apply to either type, means four or more mood episodes per year. People with Type II bipolar disorder tend to spend more time in the depressive phase and experience hypomania rather than true mania.

But don’t get it twisted: hypomania is nothing to sneeze at.

Mania of any stripe tends to come with lowered impulse control. That’s why it matters that machines can learn to predict and take advantage of it. Part of what makes mania dangerous is that it can lead to spending money you don’t have. Following me yet?

Now let’s think about other disorders of mental health. Addiction, including alcoholism, gambling, and so on, may have predictive markers as well. Taking advantage of people with difficulties like these should be considered a general no-no, as a basic ethical matter. Or, in the words of Jeff Goldblum’s character from Jurassic Park:

Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.

It could go the other direction, though. Machine learning could, in theory, be used to predict a user’s suicidal ideation or self-harm tendencies. It might even be helpful in preventing some of that, although I must state up front that I’m not sure exactly how that would work.

For people in crisis, or when a crisis may be impending, it might be used to show them links to services that offer help: someone to talk to who can listen, or who can help them get to safety if necessary, or even neutral distractions that don’t exist to siphon cash out of the person suffering. It doesn’t have to be this big, scary thing used only to line the pockets of a privileged few rich white dudes behind the curtain, so to speak.

One lesson we can learn from this is that data and machine learning can be leveraged for a variety of purposes, some good and some bad, some respectful and others exploitative. And on more than one occasion, within a few hours of a face-to-face conversation, I began seeing related ads, mostly on Facebook. Believe it or not, I don’t care. I know what I saw.

It seems like it should be illegal, but there also isn’t anything I can do about it. As frustrating as that is, though, there are people trying to use data for good, and that’s not nothing.


London Graves

Queer vegan cryptid trying their best to survive late-stage capitalism while helping others do the same.