
M89 half bot?


I know a lot of people are relieved that AI will take over the chores we normally associate with uneasy dread, like typing up some content. I don't understand how, though. I mean, I may not be thinking a simplistic, statistical Markov-chain thought; it may be a memory of a past event that triggers a massive amount of emotion that I succumb to before the words come out. How will a robot ever recreate something it doesn't have access to? Sure, it can type out a lengthy sentence strung together from Indian middle-class pain-porn in a way that obeys some bell curve it's seen somewhere. But that won't be authentic.

It won't be me. I may be, in some sense, a candidate for generalization, but my thoughts aren't. It won't have the texture that makes me me. How can anyone find that acceptable? We'll all lose our inner voice and become more robot-like. That's not what we want; how does that make us better humans? Or better robots? Fakes, perhaps.

And if, hypothetically, we have a generation that grows up not thinking about their thoughts and just asking AI to do it for them, that's damage that can't be undone. It's not the same as a calculator, as I used to say it is. Calculators compute what's beyond the biological imperative, but how can you let something else do all the thinking for you? Especially when your own learning is unfinished or unpolished. Will all human interaction become part of a state machine diagram? That's a dependency that's crippling if AI is made absent, like not having a square root or a logarithm key on your calculator. This is a necessary human part that will atrophy or grow abnormally if we do this to kids or to ourselves without thinking. It's a grave injustice foisted on posterity, or on ourselves, almost as bad as climate change, or worse: everyone becomes a stochastic repetition, an approximation to some pleasing (or state-planned) generalization.

I'd rather be this imperfect, organic, impassioned tempest of faults than a smooth rolling ball with only Newton and a few bell curves of "probably" programmed into it. I wouldn't want to be a generalized someone else; it would rob me of the human experience and plant Pavlovian artifice on an otherwise dead dog.

Like right now, when I can't fall asleep and the usual boundaries that limit my worries are blurred: this writing is from and about a somnolent, depressed person burdened by anxiety and uncertainty. How would a robot write this? It can simulate the neural network, or at least a good part of it, but it doesn't have a stomach to feel queasy and nauseous, or the hormones, or the rest of the paraphernalia that's biological but plays an important role. And if I am mad, how will a robot writing for me help? Hiding the madness would be a detriment, wouldn't it? Maybe I bite, and reading between the lines would protect someone from getting bitten, but not if I pretend, through AI, to be statistically kosher and safe.
