
Tuesday, May 2, 2023

One Day

I feel good. I'm in a good mood. My pathologies are all behaving at the same time. I almost wish I could postpone surgery so I could fully enjoy this freak occurrence. 


It's not that I can hike the way I want to; I'm not healed. But mostly my foot is contained. I can feel it, but it's chill, refusing to fuel fires. I'm getting a great long tail on the MRI steroids. And as much as I miss going for walks, right now I'm so busy in the garden that walking isn't even possible anyway. So I'm not mad.


My biggest issue is my sleep is a hot mess for several different reasons. This time of year, I wake up with the birds. I love listening to them, even though it's not really enough sleep. And my routine is all messed up from covid still and now the pre-surgery stuff too. Can't take this. Can't take that. Blah blah blah. No melatonin for meeeee.


I've been in the garden every day for the past week, working for hours. I'm racing the clock. We're due for rain. I'm due to be out of commission. 


Despite the massive increase in activity--things like literally throwing bricks and softball-sized stones, then loading them up and dragging them back to storage--I still haven't lost weight lol.


You'd think there'd be a change just from the sweat.


Whatever.


Anyway, I'm pleased to see that I'm finally developing some skills and informed judgment about gardening. I'm making progress on strategies and systems too. I'm learning! I'm getting there!


I'd say gardening is as complicated as medicine. Or right up there. Close. There's so much marketing hype and so many people trying to monetize gardening...good information is very hard to find. And there's chemistry and science, health and disease, and fifty different opinions on what to do. There's even surgery! 


I'm slowly getting to where I can see the bullshit.


I say slowly because I love gardening, but I'm bad at it. It doesn't come naturally to me. Mostly I kill things. It doesn't make sense to me. 


But it's a killer workout. Wow. I love it. The longer I go being unable to control my ability to exercise, the less interested I am in aerobics classes. Just exercise itself isn't enough now. I'm all about functional and productive fitness. Like gardening or working out specifically so gardening doesn't kill me. Or fun things like swimming or dancing. 


I want to have a good time and/or I want to have something to show for it beyond exercise. I don't get to keep any fitness I achieve or build on it because I'm constantly deconditioning due to illness, so these days I want some other payoff or I can't motivate myself.


Before my spine cyst threw a fit, I'd planned to hand build a cottage stone wall around our acreage. I figured it'd be a productive project for cold weather and it would take a long time to finish so I'd have something to work on for years. I gave up on it because of my spine, but if I can feel as good as I do right now for any reliable amount of time maybe I'll go back to that...


But aside from that, I have gardening. It can accommodate limitations. You can sit for some of it or do a little bit here and there, and there are tools to make the work easier. It's natural HIIT, and it's fairly neuroma friendly too (for me at least).


It also builds strength like crazy and the strength lasts better than if I were lifting weights alone. (Digging holes is the best workout I've ever seen.)


At the rate I'm figuring it out, we should be self-sufficient sometime in the next 1000 years. I'm going to need some of that immortality the headlines keep yapping about...


Also, acreage sounds fancy. We bought a dump of a farmhouse. We're not rich. Just frugal and handy with friends in the trades. Don't worry. Eventually medical stuff will bankrupt us. Maybe I'll do a post on that sometime...

Circling back to AI and geopolitics...


Did you notice the headlines are starting to peg some of what's happening as a BRICS thing? Hell, even TikTok has videos from just regular people talking about it now. That started a few weeks ago (maybe it's propaganda?). I'm also seeing more financial headlines talk about the 'attack' on the dollar. Part of this war is economic. I hope someone somewhere knows wtf we are doing. 


The public discourse on this tends to be very 'sides' driven, as if a world run by BRICS is going to be some kind of improvement for the little guy. People are naive if they think that. BRICS will be worse than the West (which is no saint), but everyone's so anti-West they'd rather work against their own self-interest. I especially raise my eyebrows at the part where different nations with little power are singing BRICS' praises as if they aren't going to be under a boot heel same as they are with the West.


Different boot heel, more war, more dead, more environmental damage, more human potential lost, same shit. Humans just refuse to evolve here.

There aren't really any good guys in this. It's just: do you want a world with dictators who have nukes, or do you want a world with some churn in the power structure, where 'no' is an actual thing you can say to leaders without getting a surprise polonium tea enema? Meanwhile, wealth concentration, inequality, and corporate hegemony will continue unabated either way...

As for AI...I continue to pay attention and research and learn, learn, learn. Stuff is weird. I'm reading a book about AI and robots that's eight years old right now...Rise of the Robots. Usually I don't like outdated info, but this time it's very interesting. Tech moves so fast, eight years is essentially a century, so seeing the ancient history (as it were) provides an excellent foil.


For instance, so many AI companies were trying to make AI happen at scale and...nothing. It didn't land. I'm not sure why. But a few of the companies are still in business (I Googled), doing very niche, small-scale stuff, going nowhere fast, no longer the 'It Girl' of the tech industry. I'm not sure the latest advances in AI are going to change things.

They could've scaled before now and didn't. Why? 


Technically, you and I could have house robots already. Why don't we?


I don't know. I keep running into weirdness. Lots of overselling. Grandiose claims. Strange narratives. It's just...off.

Also, I didn't realize the origin of UBI went so far back. It was Martin Luther King who started talking about the impact of automation and the need for UBI. The problem is the tech moved much more slowly than the alarm, so the alarm went nowhere. And now that every day is a hockey stick on the growth chart, when tech is actually moving at a speed that requires a response, there's no energy for it among those who have the authority to do anything about it. It's passé. There's a mismatch of urgency.

And you can make a good argument that the wage stagnation and the income inequality destroying the 99% is because of rising automation. Companies have been keeping the profits of automation this whole time, much like they've been predatory about prices and profits during Covid. (At this point, I've decided they're raising capital to implement AI and robots buuut I don't think it'll be so easily done.)

This interview with the 'Godfather of AI' gets really weird the longer I think about it.


For one, you have a scientist who ostensibly is so concerned about his work being militarized that he moves to Canada (one of the US' big military allies) to avoid US Dept of Defense funding and then materially contributes to AI that is being militarized...by the US Dept of Defense...and everyone else. 


Like, how did he not know? Why could he not extrapolate?


And he's concerned about autonomous weapons (which have actually already been used in warfare) and keeps developing the science anyway. 


If he was so concerned, why was he doing that? Why does no one ask that question? 


You know, billionaires aren't heroes, they're hoarders. It's not normal to hoard money you'll never live long enough to spend.


Now I'm starting to wonder about the psychology of scientists. I don't use my training and skill sets to create things that will destabilize the world. I could. In this era of misinformation, I absolutely could. But I don't. However, scientists like this guy do. Often at scale.


Why are scientists so intent on setting the world on fire? They know they're doing it with AI. They're not blind. Why are they so hellbent on releasing AI that has no respect for anyone's intellectual property, little ability to fact check, and a propensity to make shit up?


Do they just want to be the next big money hoarder? Is that it? Do they think it's going to save us from ourselves in the long run and it doesn't matter who pays the price for that? Solve climate change? Cure cancer? What?


How do they reconcile the harm it can cause? Why unleash it now when they don't have all the problems resolved? Why aren't we resolving the problems before we release this tech? Why isn't it still in the sandbox? What was the decision tree on that?


I don't know. Maybe he was edited poorly. But there are just a lot of WTF things in what he said.


This piece from 60 Minutes shows how the robots are doing. They're still single-purpose technology. They can only do one thing, but they're getting better about moving across different planes of motion. Maybe as ChatGPT evolves we'll get robots that can use that movement in a variety of ways per our unique directions.


Here's how CBS blurbs the interview which I think is telling: "Competitive pressure among tech giants is propelling society into the future of artificial intelligence, ready or not."


This talk from the same folks who made The Social Dilemma is a great academic overview and offers an architecture for thinking about AI and its challenges/risks. Probably one of the best things I've seen on AI to date. And, of course, because it's the sane voice in the discourse, it's getting almost no attention.


Things I still don't see talked about enough...


-Protecting human intellectual property and copyright. No one cares about this outside the actual artists and creators whose work was used to train the AI.


But data scraping is so vital to AI's development that Reddit is planning to charge for access to its API to monetize AI training while offering nothing to the users who make Reddit such an AI hot spot. The inequality of how data is monetized gets no attention from AI thought leaders.


You know, the ethos used to be different. In the mid-2000s, users were paid for the content they generated. I made five figures off royalty sharing and ad revenue splits on different sites. It's not like it's never been done. It's more that they realized if they don't pay, no one can do anything about it.


-Actually questioning the premise of whether these scientists unleashing all this chaos are operating ethically and in our best interests. Instead we laud and celebrate them...it's fucked up.


-When are we going to have fact-checking in AI? There's some ability to request fact-checking and ask for sources, but it's not automatic. In the CBS piece I linked above, AI generated a list of nonfiction books that didn't exist. It shouldn't be doing that. Why isn't this technology oriented to reality?


-Still no conversation about loophole/process-hacking expertise. They're just assuming everything humans do will be replaced by AI. But how will AI create financial systems to offshore money a la the Pandora Papers without creating a digital trail the IRS can download?


-Privacy. This does come up here and there, but not in detail. It's ignored more often than not. However, you can't run a company off open-access AI without making all your proprietary intel public. Self-hosted AI is coming, but until then, companies can't fully implement it.


-Power. No one talks about power. We have an electrical grid in the US that already doesn't work well. How much power will all this AI need, and where are we going to get a reliable supply? Plus, how much will it cost and who will pay for it? The cost of the power may be where we find some spots where humans are cheaper. (Or they'll triple residential electric costs to cover the grid updates and turn off power to keep the AI moving when it suits them.)


Again, our species constantly creates undersupplied systems and then wonders why there are problems like it's an unsolvable mystery. 


Sidebar: Yo. Forget AI, there are 8 billion people now. We need to allocate more resources to things just in general anyway. (Did you know? What should be a 5-minute commute in Lagos, Nigeria--on track to be the largest city in the world--is taking 2-3 hours now. There's been a massive influx of population and it's destroyed the efficacy of their infrastructure. This is the scale of the problem in terms of resource demand. Too many people want the same things x 8 billion.)


-Cost. Anyone remember the Concorde? The plane that could fly to destinations in half the time? And how it went out of business? It was too expensive to run even with all the extra efficiency. Here we are decades later with no improvement in the tech to make it affordable either.


Same with healthcare. We have hospitals closing down all over the place, not because there's no demand, but because the economics don't work.


Can we afford to have AI do all the things? Will that even be cost efficient once you start factoring in all the costs of running AI? 


There is some talk about having levels of capability in AI. You'll have a top level with all the bells and whistles and then a worker bee level that follows patterns the top level figured out. But will that be economical enough? Can tech dumb down enough to cut costs and still be effective?
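
To make that idea concrete, here's a tiny, made-up Python sketch of a two-tier setup where a cheap 'worker bee' model handles everything and escalates the hard cases to the expensive 'top level' model. The per-query costs and the escalation rate are invented for illustration only.

# Toy sketch of the "top level + worker bee" idea: a cheap model handles most
# queries and escalates the hard ones to an expensive model.
# All numbers here are invented for illustration, not reported figures.

def blended_cost_per_query(cheap_cost, expensive_cost, escalation_rate):
    # Every query pays the cheap model; escalated ones also pay the expensive model.
    return cheap_cost + escalation_rate * expensive_cost

cheap, expensive, escalation = 0.001, 0.05, 0.10  # dollars per query, 10% escalation (assumed)

print(f"Two-tier cost per query:       ${blended_cost_per_query(cheap, expensive, escalation):.4f}")
print(f"Expensive-only cost per query: ${expensive:.4f}")

With those invented numbers, the tiered setup comes out to roughly an eighth of the cost per query of running the expensive model on everything. Whether the worker bee tier can stay effective enough at that price is exactly the open question.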


We couldn't make flying places in half the time work. We can't make healthcare work. You can actually reach a point where efficiency is too expensive. 


Once innovation is expensive, it stalls out. 


There are already problems with the economies of scale with AI. We don't have the electric grid for this and it will be expensive to build it up. We'll need a lot more servers and a lot more water and buildings for them. The costs will all be passed on and it's quite possible the vast majority of the world will be priced out of using AI as a result.


If it's costing ~a million dollars a day now and it's been implemented at, let's say, 1% (to make the math easy), then covering the other 99% means it will potentially cost somewhere around $100 million a day to operate (unless we find some cost efficiencies at certain points, which is possible). I believe that will make AI the most expensive enterprise in history unless we get some major innovation on power and infrastructure that lowers costs.
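
For what it's worth, here's that back-of-envelope written out in a few lines of Python. Both inputs (the ~$1 million/day and the 1% adoption level) are my assumptions from above, and the scaling is naive and linear, ignoring any efficiency gains along the way.

# Back-of-envelope for the scaling claim above. Both inputs are assumptions
# from this post, not measured figures.

cost_per_day_now = 1_000_000   # dollars per day at current adoption (assumed)
current_adoption = 0.01        # assumed 1% of eventual usage

# Naive linear scaling, ignoring efficiency gains.
cost_at_full_adoption = cost_per_day_now / current_adoption
print(f"~${cost_at_full_adoption:,.0f} per day at full adoption")  # prints ~$100,000,000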


-Bias. How are we going to eliminate bias? AI is still only as good as its inputs. Which, when I look at medicine, makes my stomach sink. There are a lot of biased inputs in medicine. A lot. They're embedded in the textbooks. Language itself is infused with bias...actual word choice and sentence structure can be heavily racist or sexist and people don't even realize it.


-War. How does a world with AI that doesn't fact-check, that is full of bias, go to war? And what does an AI arms race look like? (There's a part of me that wonders if Russia made their move now because of the uncertainty of AI's impact on geopolitics and power. The top levels knew about ChatGPT long before you and I ever saw it.)


-Just as we have a right to repair, we have a right to know if something was generated by AI. Where's that movement?


-I suspect we're going to see some kind of fundie religious radicalization around AI. As a preacher's kid, the first thing I thought of was the Tower of Babel, but no one seems to have leaned into it yet. A Google search shows it has been mentioned here and there in conjunction with AI, but it hasn't gained critical mass.

It's quite possible some religious sects/demographics are going to have an Amish moment where they eschew this new technology and devote themselves to a faith driven lifestyle without it.

It's not just a Christian thing either, which is why I wrote 'fundie religious'. There are actually similar stories in other major world religions just waiting to be leveraged.
