Again, anthropologists have this concept of ideal vs. actual culture, and it's the bug that's up my ass lately. What we tout as the thing to do is not necessarily the thing that happens. I feel like I see this concept play out in many facets of society right now. I'm sure it's always been this way (or else why have the concept, right?), but the cognitive dissonance of it in this era requires unsustainable levels of suspension of disbelief.
Some examples...
In The Economist, there's a write-up on how everyone thought the pandemic would usher in the age of robots, except that didn't happen, and they decided to pontificate on it for 500 words or so. I read the article specifically looking for the admission that maybe robot tech isn't as far along as we've been told, but no. Their analysis is just 'golly gee, who knoooows why this is, it is such a mystery, guys.'
The thing is, the 'robots will replace us' boogeyman has been trotted out for the last thirty years. So where the fuck are the robots? That's the real question. Not 'why didn't the pandemic give birth to the robot age?'
I could go back further than thirty years, but the 1990s marked enough advancement in the tech to give the industry another dog and pony show to excite investment. And still: where the fuck are the robots?
Spoiler: there aren't any. The robots aren't coming. Not any time soon. Especially not with pandemic-driven supply chain issues around computer chips.
Best we can do is drones piloted by humans. Drones you can shoot out of the sky with a shoe, given what we've seen in Ukraine.
Best we can do is self-checkout, to the point where everyone shoplifts from Walmart and the corporate response is to threaten to shut down the store. (As we all know, shoplifters don't have legs or the ability to drive; they have roots that tie them to one place 24/7.)
No one wants to admit that automation lowers the threshold for what's considered immoral behavior, a.k.a. theft and destruction. It's almost like humans need humans to uphold a social contract, and machines and faceless systems only trigger side quests for cheat codes.
Someone told me they thought Chick-fil-A should have kiosks instead of making workers stand outside to take orders. They were worried about the big storm we had and how cold it was. But guess what? Tech isn't going to work in a -35° wind chill.
And when you try to make everything tech, you create an endless matryoshka of problems with complicated solutions. Fine. Install kiosks. When they break, what happens? When some asshat with a gun shoots one in the keypad in a fit of uniquely American rage, how do you fix it? You'll have to call an off-site repair company. What's your downtime, your backup system, while you're waiting on that? You don't have any people, so what's the plan? There are no pinch-hitter robots...
We are adding complexity to the point where problems are harder to fix. You can see this with the internet now. Initially, the internet put travel agents out of business--to pick a concrete example. Now they're back. There's just too much to know. I see it in my industry as well. There's literally too much to learn, too many choices, and information on human behavior around different options (key for conversion) is behind expensive paywalls or wells of experience most people will never be able to access.
Further, the software is way overhyped. Mostly, new technology doesn't work as promised. It's glitch after glitch after glitch, and somehow none of it makes headlines. You need content experts to navigate it. Otherwise, you can't make a good decision or source the right tool.
Now, there are some wins, particularly in warehouse and distribution and in preexisting factory applications, but that's about it. There are no robots ready to displace us from every sector of the economy. The vast majority of robots so far need human nannies.
Clarification: I'm not anti-tech. I'm anti lying about its capabilities and fomenting hysteria around it. Stop peeing on my leg and telling me it's a robot. There is no fucking robot. Find a better way to raise capital for your next funding round.
Anyway, I see a similar boogeyman hysteria with AI, which I keep up with and have tested heavily as it affects my industry. I speak from direct experience on this.
The ideal of AI is that it'll make everything easier and amazing. But what's actually happening is not even close. However, we're in a cycle where there's a lot of lying going on, I assume both to sell AI services and to attract more venture capital investment.
Here's the thing: the vast majority of AI was developed with no regard for copyright or intellectual property. In some cases, the code used to build the AI was allegedly stolen. I've seen a lot of people ignoring this information, even thought leaders. It's astonishing. They think we're good to go. They've drunk the Kool-Aid. The PR has done an excellent job of de-emphasizing this fun fact.
But in reality, AI is lyin', cheatin', and thievin'. AI-generated art produces images with artists' signatures, paywall watermarks, and even recognizable brush strokes... the AI is obviously sourcing from copyrighted material to 'train' itself into a commercial product. Text AI provides no source bibliography, so you can't tell if the AI is pulling from Mein Kampf or the Bible, or if medical advice is from an Ivermectin nut or the Mayo Clinic.
By being so sloppy in how they built the AI datasets, developers have essentially guaranteed the whole thing will be bogged down in lawsuits (not to mention basic functional problems) until the end of time. If what I'm hearing from my network is true, the lawsuits are going to nuke AI as it stands now. The lack of a bibliography alone is going to make it largely unworkable for anything other than novelty use.
Further, the outputs are not as sophisticated as people think. AI looks great superficially, but it doesn't hold up to scrutiny. Too many details are off. Anyone with a sense of syntax and diction won't be fooled (though not many people read at that level, so...), and the vast majority of people who would use it to cheat aren't sophisticated enough to pull it off (see high schoolers turning in Wikipedia articles complete with hyperlinks and the fundraising message). It can't handle much complexity, and text AI has also been found to make factual errors.
Art AI has its own trouble spots. It can't draw hands or eyes very well, create complicated scenes, or follow clear directions. The prompt 'dog sleeping under the Christmas tree' is more likely to result in a horror mash-up of half Christmas tree, half dog than anything else.
The only AI that's working and fully functional is narration. AI narration will become a standard going forward. But is it really AI? Is it though? It's not doing much more than fancier text-to-speech. It's automated, but it's not AI. It can't edit. Can't infer. Can't correct. How is it AI? And if it is AI, why isn't it showing more capability?
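For perspective on how little 'intelligence' is actually involved: here's a minimal text-to-speech sketch in Python using the pyttsx3 library (my pick for illustration, not anything a narration vendor has confirmed they use; commercial engines just layer nicer voices on the same pipeline). Text goes in, speech comes out, and nothing in the loop edits, infers, or corrects.

    # Text-to-speech has been a scriptable commodity for years.
    # Minimal sketch using pyttsx3 (pip install pyttsx3), which wraps
    # the speech engine already built into the operating system.
    import pyttsx3

    engine = pyttsx3.init()          # grab the local speech engine
    engine.setProperty("rate", 160)  # speaking speed in words per minute
    engine.say("It's automated, but it's not AI.")  # queue the text verbatim
    engine.runAndWait()              # speak the queued text, then return

That's the whole trick. Everything else is polish.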
And humans are turning out to be dumber than the AI. I read one article on text AI where the dude used it as a search engine. Why the fuck would you use an AI that stopped feeding its dataset in 2021 as a search engine? Why the fuck would you use an AI that won't give you a bibliography of inputs? That doesn't materially improve upon search engines but instead co-opts them? Ooo, search results written like an essay. Much cutting edge. Much avant-garde. So revolution.
Why the fuck are people's standards so low and their imaginations so poor? We could have AI that could take us to Mars, but we'd use it to find a cookie recipe and prank people like Putin and accidentally spark a nuclear exchange. The existence of a capability and its effective utilization are two different activities, and all too often, never the twain shall meet. The Venn diagram isn't one full circle here.
Again, it's just a big disconnect. What's said and what is and what's done are ALL different in this era, and they don't intersect. There's no unifier, no synthesis. If there were, AI wouldn't have a $29 billion valuation after its latest bot and media hype campaign. Once you start asking questions, things don't hold up.
Side question: Where are the skeptics? Where are the people who pick apart ideas to make sure they hold up?
Let's get back into the pandemic for a second. There were two articles being passed around social media recently. One on how everyone's fat now and another on how everyone needs to exercise. Everyone was very excited and agitated over these articles. OMG fatties bad. OMG lazies bad.
There's no recognition of the impact of Covid. Yes, that's me in the corner, looking at all the long haulers, going 'what about Covid?'
Y'all, it's not 2019. We're not the humans we were three years ago. Why are we doing 2019 science still?
I lost weight with Covid. I'm way skinnier now, but my health is so much worse. Other people gained weight because of Covid. And Covid has spread at such scale that you have to wonder if weight has ceased to function as a biomarker of health or disease to the degree it did in the before times. Most of us can't exercise. I'm still just trying to manage daily activities. But yes, please, give us a lecture about weight and exercise being the key to health, as if the pandemic never happened.
We have to factor in Covid. But we can't manage it. No one can even SAY that out loud. We're all just going to pretend the foundations of science weren't undermined by Covid. People had all this research in motion, all these aspirations of a career in science that predate Covid, and no one is stopping the train to make sure it's still arriving at the right station. Everything changed, but we're publishing what's already outdated come hell or high water.
Cancer in the under-50 age group is increasing. I read the article several months ago looking for mention of forever chemicals and microplastics, and nope, not one word. It was all fat, fat, fatties. To the point where I wonder if that's a paid omission. Gotta keep environmentalism down. (For those who don't know, there's a long history of big polluters suppressing information and funding extremism to undermine the environmental movement--remember big tobacco and their BS? That. Also, environmentalists are straight up murdered a lot. Here's a BBC article on that just so you know I haven't gone full conspiracy theory.)
We're not incorporating all the factors at play into our analyses, whether through our own lack of imagination or via competing agendas. Ergo, nothing is functional. It's all disconnected.
And no one talks about it.
What gets attention is some dipshit using AI as a search engine. The Economist super confused about the lack of robot overlords. No one questions the premise. No one looks at what's missing from the datasets or what errors were introduced. No one compares the ideal to the actual and calls bullshit.
There's another article this week talking about how science is less disruptive now, as in they're running out of stuff to learn. Full disclosure, I haven't read it, but I've watched some of the conversation around it. People think we've discovered almost everything there is to know. That's the consensus.
But to me, it's the most ignorant thing anyone could think.
And I guess I'm just wondering: how the hell do we navigate this mess? All the threads are off the loom,* and we're trying to weave single threads into a whole, but it's not working; it can't work.
I don't have any answers, but it seems like calling out the problem would be a place to start. Tech hype is merely hype, and it's particularly aggressive right now. We have to integrate Covid and pollution into medical science or pay the price for the omission. Our problem solving, the way we synthesize information and data, needs to evolve. Until it does, we're stuck, and tech won't save us, both because of what it can actually do right now and because we're not building or using it wisely in the first place.
*Did you know? Punch cards that programmed Jacquard looms in the 1800s form the basis of modern computer coding.