The Four Dangers of AI
OpenAI's models and other large language models (LLMs) now give answers to questions that would make scholars proud. They are literate, organized, and work tirelessly. They even produce unpredictable answers that their own programmers don't understand. Models with only millions of parameters do not exhibit this, but those with billions of parameters have produced unexpected results, seemingly almost overnight. Neuroscientists tell us that a baby's brain develops in leaps and bounds as it gains functionality; AI is just a baby, or maybe a toddler. Billions of parameters are approaching our brain's capabilities. So perhaps the LLMs are beginning an assault on humans at a basic level. Here we summarize the four most fundamental dangers of AI.
A. The biggest danger is trust. When medical diagnostic programs become standard, what doctor will have the courage to contravene them? Imagine a government trusting LLMs to make its decisions. It is then, as they say, "in the box," and fully predictable.
B. The second biggest danger is bias, which is more subtle and risky than usually described. Ask any of them about politics, and you will see for yourself.
C. The third is reliance. Too much reliance implies we all turn in our badges and just go fishing.
D. The fourth is obsolescence. Consider children growing up in a world where an LLM can do everything they might hope to do. Teachers gone. Inquiry a few keystrokes away. Quiescent brains will be the norm. A subworld of the proletariat will emerge, offline insofar as is possible. Obsolescence of our own creation.
I asked the AIs themselves (ChatGPT and Bard) what they thought were the greatest dangers of AI. They gave more suggestions, less theoretical to be sure and seemingly all shaped by their programmers, including:
· Unwarranted war
· Mass identity theft
· Unemployment
· Cybersecurity
· Loss of privacy
· Misinformation
· Weaponization
· Existential risks
· Lack of transparency
· Discrimination
Please Comment.