Technology has developed and altered human life so swiftly that it’s left many of society’s institutions ill-equipped to handle its faults.
Back in 1962, when animators William Hanna and Joseph Barbera pictured the future, they imagined a world where technology solved any and all inconvenience. Forget grocery shopping: dinner can be teleported right to the table. Can’t find the energy to brush your teeth? A robot can do it for you. Weekday traffic is nonexistent because cars can fly! Or you can simply teleconference into work! So goes the premise of the animated series The Jetsons, whose characters are saved from the banalities of daily labor thanks to nifty, push-button devices yet still complain about how hard life is—à la any classic sitcom.
Fast forward nearly 60 years, and much of the tech Hanna and Barbera dreamed up actually exists, like drones, jetpacks, and smartwatches. The technology of today, however, has seemingly caused more inconvenience than it has solved, and in the process has exposed some of the uglier aspects of humanity. For decades, people looked to tech as a source of optimism about the future and, in short, expected it to solve society’s most convoluted problems. It’s not an overstatement to say that optimism has since soured. Social media platforms originally designed to better connect the world have only made us more insular, lonely, and susceptible to misinformation. Artificial intelligence (AI), once lauded as potentially better than human intelligence, is embedded with our same biases and blind spots. And the technology industry notoriously gatekeeps against diverse and lower-income populations, and creates products with price tags so hefty that the digital divide grows wider by the day.
But before we start asking ourselves if tech can still save us, perhaps we are better off re-evaluating why we look to tech in the first place. At its core, technology is simply a tool, developed by and for humans, to solve various problems. So why do we place so much significance on its ability to change our lives? And when it doesn’t, why do we use it as a scapegoat instead of as an opportunity to look inward?
“There’s nothing wrong with being excited about technology and excited about its potential in what it can do for us and things that it might be able to ameliorate or improve,” said Louisa Heinrich, founder of Superhuman Limited, which promotes “human-centric” technology. “But those expectations need to be tempered with a dose of reality.”
Take social media, perhaps the most widely accessed modern tech transformation: with nearly 4 billion users each day, it’s effectively synonymous with the internet. But a reality for many social media users over the last year has been the increased prevalence of misinformation and conspiracy theories throughout their newsfeeds.
During the 2020 election cycle, which coincided with the coronavirus pandemic, conspiracy theories flourished online and, in many cases, continue to be pushed by influencers and political actors with large followings. After months of doing little, some platforms banded together to impose strict crackdowns on QAnon groups and on former President Donald Trump, both known for inciting coordinated harassment campaigns and spreading disinformation. While the moves were effective in lowering the amount of misinformation online, it has not been completely eradicated. Right now, there is no centralized way to police misinformation across the internet. While the internet is public domain, the platforms we use to connect with one another are owned and run by private companies, which, at the end of the day, are keeping an eye on their bottom line. And despite calls from critics and legislators to rein in Big Tech, it may take years before laws are put in place, companies are broken up, or misinformation is better controlled. It may never be, according to Joseph Uscinski, an associate professor of political science at the University of Miami and co-author of American Conspiracy Theories.
“I think with all of the bans happening online, what we’re going to start to see is that it’s not going to make a difference in what people believe,” he said. “We’re going to realize it’s not the medium that’s the problem, it’s the people. The algorithms aren’t making up conspiracy theories.”
Sites like Facebook rely on artificial intelligence and third-party fact-checkers to reduce the distribution of misinformation by removing hateful content or labeling it. But the same algorithm-powered AI is what keeps users on Facebook in the first place, by promoting emotionally engaging content, which has proven to be divisive and exploitative.
“AI is improving exponentially all the time,” said Yotam Ophir, an assistant professor of communication at the University at Buffalo who has been studying misinformation for over a decade. “We are going to have better tools for detecting misleading claims and conspiracies, but the question is: How will we decide to use that technology?”
Just as we’ve seen the fault lines in social media, the cracks are beginning to show in how we’ve adopted cost-saving technologies like facial recognition, algorithmic decision-making, cybersecurity, and AI. Many institutions, like police departments and universities, rely on these technologies to do their daily jobs despite highly publicized examples of their being faulty and, oftentimes, racially discriminatory. There are dozens of cases in which facial recognition software misidentified Black men as suspects in crimes they did not commit, leading to wrongful arrests. During the monthslong Black Lives Matter protests last summer, police used facial recognition to identify protestors and build a database of images, leading civil rights activists to question whether the practice violated the First Amendment and protestors’ right to privacy. And while companies like IBM, Amazon, and Microsoft pledged a one-year moratorium on facial recognition software sales to the government, they kicked the responsibility back to the government to provide guidance on ethical AI. There’s money to be made here, too: the market for “facial biometrics” is expected to reach $375 million in the next four years.
“Facial recognition is a technology that frankly shouldn’t be developed, shouldn’t exist, and shouldn’t be perfected,” said Lia Holland, campaign director for the nonprofit advocacy group Fight for the Future. “We need to recognize what priorities we should have going into the future, and the biggest priority should be privacy as a fundamental human right, which has drastically eroded through technology.”
The inherent flaws of these technologies often lead to public outcry over whether they should be implemented or even developed. But it’s easy to misplace the blame, according to Holland. “There’s a very dangerous perspective that technology is smarter than us, and that artificial intelligence is better than human intelligence,” they said. “The use of these technologies is becoming a way to remove the personal element from inflicting racism or harm because if the computer made you do it, the computer is always right.”
And even though technology seems widely accessible, not everyone has it. The price of the new tech deemed necessary for daily life, such as smartphones, personal laptops, and at-home Wi-Fi, has surged in recent years and shows no signs of slowing, while rural broadband initiatives remain disjointed at best. The digital divide is particularly prevalent in education: when the pandemic pushed everyone home and schooling online, it was on full display.
“The digital divide has many layers to it,” said Carolyn Heinrich, professor of public policy and education at Vanderbilt University. “First, can you get internet? But the most important is do you have people who can support you and leverage the tools? If you are a kid who is taking online classes during the pandemic and you didn’t have someone with technical knowledge at home, you might not be able to log in that day.”
A lack of technology seriously hampers a student’s ability to perform well academically. Some students have had to camp out at fast-food restaurants just to use the Wi-Fi to turn in assignments on time. Remote learning has also contributed to skyrocketing levels of depression and anxiety in students, and compounded already-existing racial disparities in access to quality education for Black and Latinx students. Experts say the situation has improved since last spring, as students adapted to virtual classrooms and school districts offered alternatives like free tablets, discounted internet subscriptions, and even Wi-Fi buses, but much of the damage has already been done.
These systemic failures are the result of technology evolving at a pace that neither society nor the government can keep up with. The problems with a technology are often realized long after its launch date, but that doesn’t mean blemished tech can’t provide us with solutions and serve its intended purposes.
Despite their faults, technologies like AI, surveillance, and facial recognition have successfully streamlined mundane, arduous processes and improved data reliability through predictive analytics—determining what’s most likely to happen in the future based on what’s happened in the past.
This means the monolithic problems humans currently face can be more easily broken down and visualized, making solutions more accessible. AI, for instance, can power through mountains of data and compile forecasts for the future supply and demand of nonrenewable resources, create schedules for power grids that reduce carbon-dioxide emissions and avoid failures, and supply urban planners with information on how to build intelligent infrastructure so they can design the smart cities of the future.
“AI can help us surface patterns and understand how certain behavior contributes to climate change, while also identifying what changes we could make and how we can use energy better,” Louisa Heinrich said. “A smart city done right is a city that connects with its citizens and empowers them and enriches their lives at the same time, enabling those citizens to report their data and their observations to find opportunities for improvement.”
Holland believes technology acts as a mirror, amplifying our own existing biases regardless of the developer’s good intentions. But a mirror is also a good thing: it can help developers identify blind spots, reconsider who the user actually is, and make products for them, instead of building products for folks who look like the designer (typically cis, white men). “We’ve created a very powerful force that can be used for good, we just have to be actively choosing, as a culture, to take up that power, because for too long it’s been left in the hands of some bros in Silicon Valley,” Holland said.
Yet the technology we develop, now or in the future, won’t matter if it isn’t accessible to everyone, or if we fail to teach people how to use it and, more importantly, how to scrutinize it. Society will perpetually be chasing tech with a fire extinguisher if the tech we develop continually fails to incorporate aspects of humanity.
“Technology is really only as great and successful as the people behind it,” said Heinrich, the founder of Superhuman Limited. “There’s no such thing as technology that you can build once and then it’ll be great forever. It’s not God, it’s a set of tools.”
If tech is truly a mirror, what it has shown us is that society is deeply flawed and harmful, a monolithic dilemma on par with climate change. But if we take that image and use it to create better, more inclusive tools, perhaps there is a possibility for optimism.
Large-scale societal change takes time. We are still digesting the last 10 years of technology, and we are likely to still be reeling from our existing tech-related problems a decade from now, too. We may even be eons away from a Jetsons-esque life, even though the show was rather dystopian.
The responsibility for saving humanity has long rested on the intangible shoulders of technology. If we are to be saved, whatever that means, that responsibility needs to shift. Remember: humans are the ones who create technology. So the question shouldn’t be whether tech will save us, but whether we will save ourselves.