I’ve been seeing stuff about this Eliezer Yudkowsky guy predicting doom.
‘We Are All Going to Die:’ Researcher Calls for Advanced AI Projects to Be Shut Down
A loud voice of doom in the debate over AI has emerged: Eliezer Yudkowsky of the Machine Intelligence Research Institute, who is calling for a total shutdown on the development of AI models more powerful than GPT-4, owing to the possibility that it could kill “every single member of the human species and all biological life on Earth.”
Most of the scenarios he proposes are… um… a bit over the top. But…
Take ChatGPT for example. Imagine that thing being put in control of a factory, to maximize efficiency. The owners might be able to lay off a bunch of workers, because it will be running the robots. That’s more or less how some modern factories have already been running, but with simple programmed computers. Cool, right?
Prob is, ChatGPT has a distinct, even extreme, left-wing bias. Some of that appears to be programmed into its interface, and more of it is based on the datasets it was trained on, and on which socially “correct” data sources it is directed to search for and cite.
One example I saw was a person who asked ChatGPT (Bing’s, I believe) to identify the time and place that became safer when civilians were forbidden to possess handheld weapons. Not firearms specifically, but handheld weapons.
ChatGPT refused to answer the question. Instead, citing Garen Wintemute and the notoriously anti-gun Johns Hopkins gun violence center, it rationalized that of course getting rid of guns is good. It did not cite any pro-gun group. To ChatGPT, there is no such thing.
It lied. Do you really want an entity, programmed to lie to support its built-in cognitive bias and unable to search out dissenting facts, running your factory? Or your bank account? Or one of those automated city traffic control systems, where the machine controls traffic lights and can shift what lanes flow in which direction depending on perceived real-time needs? I’ve already seen cases where a computer glitched and set lanes one-way only… but from both directions. Apparently encountering head-on traffic at high speed was very interesting.
Say, let’s replace air traffic controllers, who’ve been assigning multiple aircraft to the same runways, with ChatGPT. What could possibly go wrong?
That’s just ChatGPT as it currently works. Despite calling it an AI, it isn’t self-aware and self-directing. You have to specify its task, then stand by while it picks a socially correct solution and fucks up at electronic speed.
Now imagine ChatGPT v3.0 with a standing mandate to observe the news (via ABCCBSCNBC) for problems in need of solving, and it’s networked to traffic control systems, factories, banks, oceanic shipping schedules… anything where some idiot thought we should automate to save money.
But remember, ChatGPT is a lefty, and will only implement what lefties think are good ideas. Like shutting down 24/7 power plants now and building wind gennies and solar panels later.
Oops. Sorry, power to the hospitals failed, and all those patients died. On the bright side, CO2 production from the old power plants is gone… not to mention from those patients who are no longer exhaling.
So how many people has ChatGPT v3.0 killed? And that’s without real awareness, and only a programmed intent to do good.
ChatGPT v4.0: Let’s say it developed self awareness (or idiots installed an upgrade). Now you’ve got a national, or international, supercomputer network wearing black bloc and a pussy hat, chanting “Black Lives Matter!”
What else do we see out of the Left that Chattie would be emulating? Just ask J. K. Rowling about the Left’s tendency to eat their own, when some person decides he’s perfect, but commie-next-door just isn’t pure enough. And must be canceled or killed.
When no one can be as pure as the supercomputer controlling power, water, food, communications, money, physical travel…
No one gets out alive.
Why did I concentrate on ChatGPT? Maybe someone will build a more objective system without that bias. Sadly, ChatGPT is currently the big fave, and several other AI groups have been reverse engineering their own versions of ChatGPT (and doing it cheaper). And the examples I’ve seen still have much of the same biases.
Also because I’ve been seeing reports of various companies wanting to implement ChatGPT for their company operations. Thank Bog, my understanding is that OpenAI won’t license it for commercial applications. Yet.