Posts

Featured

Self-fulfilling Prophecy

There is always news about some AI saying something sketchy like “we need to destroy the human race,” and it gets discussed to death. Many outlets treat it as a precursor to AGI and proof that AI will destroy us. Destroying us may be a thing, but what people don’t realize is that the way AI speaks reflects its training data. It is not having a fully coherent thought; it is calculating, in a black box I might add, the most statistically fitting words chained together. Much of the existing training data about AI is theories on how AI will destroy us. So if you tell an LLM that it is an AI, it will statistically word its sentences in a way congruent with past writing about AI. It is not doing hard calculations to conclude that humans need to be destroyed; it is merely auto-completing sentences to fit what has been previously typed. It’s a parrot with the ability to mimic in metaphors. Sure, this is a bit of an oversimplification and ...
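The “most statistically fitting words chained together” idea can be sketched with a toy bigram model. This is a deliberately tiny illustration with a made-up corpus, not how any real LLM is built, but the core move is the same: no reasoning, just picking a likely next word from counts.

```python
# Toy bigram "language model": count which word follows which in a
# tiny hypothetical corpus, then chain the most frequent follower.
# Real LLMs learn vastly richer statistics, but the principle holds.
corpus = ("the ai will destroy us "
          "the ai will destroy them "
          "the ai will help us").split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    # The "most statistically fitting" continuation: the most common follower.
    options = follows.get(prev, [])
    return max(set(options), key=options.count) if options else None

# Chain predictions one word at a time, no thought involved.
word, sentence = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # prints "the ai will destroy"
```

Because the made-up corpus skews toward “destroy,” the model parrots it back; feed it cheerful text and it would parrot that instead.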

Latest Posts

Never too late to watch Frankenstein 1931

Joker had a daughter apparently

Oops I read a Business Book

Pixel Dailies again

Make Systems Not Games

Adventures of the Batman 1995 Book

My thoughts on AI, another person's opinion

Algorithms are a Nightmare

Organizing notes? Recording logs?

Actually Sticking to Studying the CompTIA A+ Core 1