Elon Musk's biggest worry

How the next wave of technology is upending the global economy and its power structures
By KONSTANTIN KAKAES 

With help from Derek Robertson and Ben Schreckinger

Elon Musk and a person in a robot costume at Tesla’s 2021 “AI Day.” | Tesla, via YouTube
It’s not Twitter.
In 2017, at a meeting of the National Governors Association, Musk opined that "the scariest problem" is artificial intelligence — an invention that could pose an unappreciated "fundamental existential risk for human civilization."
Musk has, for years, seemed to be attuned to the dangers of AI. As far back as 2014, he told students at MIT that “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
So you might expect Musk to be cautious about how his companies deploy AI, and how carefully they stay within regulators' guidelines.
Not exactly. Musk is a big player in AI, in part through his car business. He has described Tesla as the world's biggest robotics company. "Our cars are semi-sentient robots on wheels," he said in a speech last year at an "AI Day" event the company held. There, he also announced plans to build a prototype robot sometime in 2022. The robot, he said, is intended to be friendly and to eliminate "dangerous, repetitive, and boring tasks." He said he'd make it slow enough to run away from and weak enough to overpower.
Over the years, Tesla has not only pushed AI-powered autopilot systems beyond what regulators like the National Transportation Safety Board say is prudent, but has also failed for over four years to "implement critical NTSB safety recommendations," according to an October 2021 letter from Jennifer Homendy, the agency's chair. And as Fortune reported in February, Neuralink, a brain-chip startup that Musk also runs, may have misled federal regulators about his role there. Musk says he wants Neuralink chips to help humans achieve a "symbiosis with artificial intelligence."
Musk’s willingness to comply with securities regulations raises broader issues about how Neuralink might comply with regulations for brain-computer interfaces that experts argue urgently need to be written.
Emanuel Moss, a postdoctoral scholar at Cornell Tech and the Data & Society Research Institute, said that “it serves Musk’s interests to position himself and his companies as best able to address an elevated imagining of the risks around AI.”
In Moss’s telling, Musk argues that his companies are the “few who are capable of addressing the risks of AI in a technically astute or robust way.” But Musk, he said, “wants to sell a shiny box that solves the problems. He thinks there are technical solutions to what are in fact social problems.”
That’s also the view of Alex John London, the director of the Center for Ethics and Policy at Carnegie Mellon University, who said that “warnings about AI make industry look socially minded and are often window-dressing meant to build trust without that trust being warranted.”
Gianclaudio Malgieri, a professor at EDHEC Business School in Lille who studies AI regulation and automated decision making, said he sees Musk’s marketing strategy as “having AI as an enhancement of humanity, and not a substitution of humanity.”
But this distinction is not a clear one. People alive 50 years ago, Malgieri said, would be shocked to learn how much of our mental capacity we have already ceded to AI — think how easy it now is to Google basic facts or rely on GPS and AI for directions to a friend's house, or how thoroughly algorithmic recommendations now shape people's musical preferences.
Immediately before Musk spoke about Tesla's robotic ambitions at AI Day, a person in a tight white bodysuit and a blank-faced black mask walked stiffly onto the stage, as though to fool the audience into thinking they were a highly capable robot, before dancing maniacally to electronic music. It was a jarring attempt to blur the line between people and robots.
Malgieri recounted the fable of the frog in a saucepan of water that is slowly brought to a boil, and doesn't realize it's going to die until it's too late. "When do we start," he wondered, "to give away our humanity to machines?"
Musk said at the AI Day event that he wants to be able to ask a robot to go to the store to pick up groceries. The question Malgieri asks: What is lost when robots do the shopping?

Mozilla’s 2022 festival showcased the organization’s tech activism. | Business Wire
As we covered last week here at DFD, Europe is miles ahead of the States when it comes to putting regulatory guardrails around artificial intelligence. One Silicon Valley group lending its expertise to lawmakers and regulators is the Mozilla Foundation, which published a blog post Monday listing recommendations for the European Union's far-reaching AI Act.
The post, written by Mozilla executive director Mark Surman and senior policy researcher Maximilian Gahntz, points out three major areas where the act as currently written could be improved: balancing accountability for responsible AI use between developers and users; writing more stringent disclosure requirements around the use of so-called "high-risk" AI technologies; and creating a means for end users to file complaints about perceived misuse.
“Technologies that are potentially neutral, or that may have biases themselves but their design doesn’t imply an inherently high-risk activity, can [still] be used for high-risk purposes or low-risk purposes,” Surman said. “We see our role — we see the need for the commission — to wrestle to the ground the practical questions of how to deal with that; we think right now the act is just too simplistic.”
The researchers recommended that EU legislators "future-proof" the bill by broadening its scope so that it can address potential AI-driven harms that don't even exist yet.
“They define eight areas in which this can be amended,” Gahntz said. “That unnecessarily limits the room to maneuver for the Commission and the European legislators in the future. Just because we don’t know right now that something may be risky and may harm people doesn’t mean that two years or three years from now that might not change.”
Surman and Gahntz said that European regulators have been largely receptive to their recommendations, and that they'll continue to offer expertise as the lengthy legislative process rolls on. (The Digital Services Act, recently agreed to in principle by EU legislators, was first proposed in December of 2020.) As with that law and the rest of Europe's pioneering data privacy regulation, don't be surprised if the debates playing out in Brussels today over AI pop up again in Washington… eventually. — Derek Robertson

Following Friday’s item on the ties between crypto mogul Brock Pierce’s independent Vermont Senate run and Trump world, Pierce’s campaign sent along a statement today saying that he has parted ways with his team of Donald Trump aides over “ideological differences.” Pierce said that in addition to Steve Bannon, he has consulted with Bill De Blasio on his Senate ambitions and that his campaign is now working with a team of Democratic and independent operatives including Ben Kinsley, Tyree Morton, Jeff Leb and David Weiner. — Ben Schreckinger
Artificial gullibility? One unsettling fact emerging about AI: just as machine learning can detect patterns that humans can’t, it can also be fooled in ways that would never fool a person.
This isn’t just theoretical: In a somewhat disturbing Twitter thread last week, the writer Cory Doctorow laid out a laundry list of examples of how machine learning algorithms have been tricked and manipulated by researchers, occasionally in a crude and simplistic fashion with potentially dangerous implications:

In his thread, which he also compiled as a blog post, Doctorow recapped a recent paper showing how these "adversarial examples" — data that, when fed to a machine learning system, causes it to malfunction — could be planted in basically any such system, for any purpose, and remain undetectable to anyone who didn't already know where to look.
“In other words, if you train a facial-recognition system with one billion faces, you can alter any face in a way that is undetectable to the human eye, such that it will match with any of those faces,” Doctorow writes. “Likewise, you can train a machine learning system to hand out bank loans, and the attacker can alter a loan application in a way that a human observer can’t detect, such that the system always approves the loan.” — Derek Robertson
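The underlying trick is simple enough to sketch in a few lines of code. What follows is a minimal, illustrative example of the fast gradient sign method, one of the oldest recipes for crafting adversarial examples; the classifier, data, and epsilon value are assumptions made for the sketch, not details from Doctorow's thread or the paper he recaps.

```python
# Sketch of the fast gradient sign method (FGSM), a classic way to craft
# adversarial examples. Assumes a PyTorch image classifier; the model,
# inputs and epsilon below are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Nudge each pixel by at most `epsilon` in the direction that most
    increases the model's loss -- imperceptible to a person, but often
    enough to flip the model's prediction."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()  # gradient of the loss with respect to the pixels
    return (images + epsilon * images.grad.sign()).detach()

# Hypothetical usage:
#   adv = fgsm_perturb(classifier, batch_of_images, true_labels)
#   classifier(adv).argmax(dim=1)  # often no longer matches true_labels
```

The sketch illustrates the asymmetry Doctorow describes: the perturbation is computed from the model's own gradients, so it exploits structure the model relies on but a human eye never sees.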
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Konstantin Kakaes ([email protected]); and Heidi Vogt ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up here. And read our mission statement here.
CORRECTION: An earlier version of Digital Future Daily misstated the location of the EDHEC Business School campus where Gianclaudio Malgieri teaches. It is in Lille.
© 2023 POLITICO LLC
