

Monday, October 5, 2020

Import AI 217: Deepfaked congressmen and deepfaked kids; steering GPT3 with GeDi; Amazon’s robots versus its humans


Amazon funds AI research center at Columbia:
Amazon is giving Columbia University $1 million a year for the next five years to fund a new research center. Investments like this typically function as:
a) a downpayment on future graduates, which Amazon will likely gain some privileged recruiting opportunities toward, and
b) a PR/policy branding play, so when people say ‘hey, why are you hiring everyone away from academia?’, Amazon can point to this.

Why this matters: Amazon is one of the quieter big tech companies with regard to its AI research; initiatives like the Columbia grant could be a signal Amazon is going to become more public about its efforts here.
  Read more: Columbia Engineering and Amazon Announce Creation of New York AI Research Center (Columbia University blog)

###################################################

Salesforce makes it easier to steer GPT3:
…Don’t say that! No, not that either. That? Yes! Say that!…
Salesforce has updated the code for GeDi to make it work better with GPT3. GeDi, short for Generative Discriminator, is a technique to make it easier to steer the outputs of large language models towards specific types of generations. One use of GeDi is to intervene on model outputs that could display harmful or significant biases about a certain set of people.

Why this matters: GeDi is an example of how researchers are beginning to build plug-in tools, techniques, and augmentations that can be attached to existing pre-trained models (e.g., GPT3) to provide more precise control over them. I expect we’ll see many more interventions like GeDi in the future.
  Read more: GeDi: Generative Discriminator Guided Sequence Generation (arXiv).
  Get the code – including the GPT3 support (Salesforce, GitHub).
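
For intuition, here’s a minimal sketch of the discriminator-guided decoding idea GeDi is built around: a small class-conditional LM is run with a desired and an undesired control code, the two sets of next-token logits are turned into a per-token class probability via Bayes rule, and the big model’s logits are boosted toward tokens the discriminator attributes to the desired class. The function name, the equal-class-prior simplification, and the omega knob below are illustrative assumptions, not the official Salesforce implementation.

import torch

def gedi_reweight(base_logits, desired_logits, undesired_logits, omega=30.0):
    """Sketch of one GeDi-style decoding step (simplified, per-token).
    base_logits:      next-token logits from the large LM being steered
    desired_logits:   logits from a small class-conditional LM run with the desired control code
    undesired_logits: logits from the same small LM run with the undesired control code
    omega:            steering strength (illustrative value, not the paper's setting)
    """
    # Bayes rule with equal class priors: P(desired class | candidate token) for every token.
    class_log_probs = torch.log_softmax(torch.stack([desired_logits, undesired_logits]), dim=0)
    log_p_desired = class_log_probs[0]
    # Boost tokens the discriminator attributes to the desired class, then renormalize.
    return torch.softmax(base_logits + omega * log_p_desired, dim=-1)

# Hypothetical usage: sample the next token from the steered distribution.
vocab_size = 50257
steered = gedi_reweight(torch.randn(vocab_size), torch.randn(vocab_size), torch.randn(vocab_size))
next_token = torch.multinomial(steered, num_samples=1)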

###################################################

Twitter: One solution to AI bias? Use less AI!
…Company changes strategy following auto-cropping snafu…
Last month, people realized that Twitter had implemented an automatic image-cropping algorithm on the social network that seemed to exhibit algorithmic bias – specifically, under certain conditions the system would reliably show Twitter users pictures of white people rather than black people (when given a choice). Twitter tested its auto-cropping system for bias when it rolled it out in 2018 (though, crucially, it didn’t actually publicize its bias tests), but the system nonetheless seemed to fail in the wild.

What went wrong? Twitter doesn’t know: “While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product”, it says.

The solution? Less ML: Twitter’s solution to this problem is to use less ML and to give its users more control over how their images appear. “Going forward, we are committed to following the “what you see is what you get” principles of design, meaning quite simply: the photo you see in the Tweet composer is what it will look like in the Tweet,” they say.
  Read more: Transparency around image cropping and changes to come (Twitter blog).

###################################################

Robosuite: A simulation framework for robot learning:
Researchers with Stanford have built and released Robosuite, a robot simulation and benchmark framework based on the MuJoCo physics engine. Robosuite includes simulated robots from a variety of manufacturers, including: Baxter, UR5e, Kinova3, Jaco, IIWA, Sawyer, and Panda.

Tasks: The software includes several pre-integrated tasks which researchers can test their robots against. These include: block lifting; block stacking; pick-and-place; nut assembly; door opening; table wiping; two-arm lifting; two-arm peg-in-hole; and two-arm handover.
  Read more: robosuite: A Modular Simulation Framework and Benchmark for Robot Learning (arXiv).
  Get the code for robosuite here (ARISE Initiative, GitHub).
  More details at the official website (Robosuite.ai).
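
For a sense of what using it looks like, here’s a minimal sketch in the spirit of the project’s quickstart: build a single-arm lifting environment and drive it with random actions. The exact argument and attribute names (env_name, robots, action_spec, and the rendering flags) are assumptions drawn from the released code and may differ between versions, so treat this as illustrative rather than canonical.

import numpy as np
import robosuite as suite  # requires a working MuJoCo installation

# Build a block-lifting task with a Panda arm; rendering is disabled for a headless run.
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)

obs = env.reset()
low, high = env.action_spec  # per-dimension action bounds

for _ in range(200):
    action = np.random.uniform(low, high)        # random arm + gripper commands
    obs, reward, done, info = env.step(action)   # advance the simulation one step
    if done:
        obs = env.reset()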

###################################################

US political campaign makes a deepfaked version of Congressman Matt Gaetz:
…A no good, very dangerous, asinine use of money, time, and attention…
Phil Ehr, a US House candidate running in Florida, has put together a campaign ad where a synthetic Matt Gaetz “says” Q-anon sucks, Barack Obama is cool, and he’s voting for Joe Biden. Then Phil warns viewers that they just saw an example of “deep fake technology”, telling them “if our campaign can make a video like this, imagine what Putin is doing right now?”

This is the opposite of helpful: It fills up the information space with misinformation, lowers trust in media, and – at least for me, subjectively – makes me think the people helping Phil run his campaign are falling foul of the AI techno-fetishism that pervades some aspects of US policymaking. “Democrats should not normalize manipulated media in political campaigns,” says Alex Stamos, former top security person at Facebook and Yahoo.
  Check out the disinformation here (Phil Ehr, Twitter).

Campaign reanimates a gun violence victim to promote voting:
Here’s another example of a very dubious use of deepfake technology: campaign group Change the Ref used video synthesis technology to resurrect one of the victims of the Parkland school shooting, so they could implore people to vote in the US this November. This has many of the same issues as Phil Ehr’s use of video synthesis, and highlights how quickly this stuff is percolating into reality.

‘Technomancy’: On Twitter, some people have referred to this kind of reanimation-of-the-dead as a form of necromancy; within a few hours, others started using the term ‘technomancy’, which feels like a fitting term for this.
  Watch the video here (Change the Ref, Twitter).

###################################################

Report: Amazon’s robots create safety issues by increasing the speed at which humans need to work:
…Faster, human – work, work, work!…
Picture this: your business has two types of physically-embodied worker – robots and humans. Every year, you invest money into improving the performance of your robots, and (relatively) less in your people. What happens if your robots get surprisingly capable surprisingly quickly, while your people remain mostly the same? The answer: not good things for the people. At Amazon, increased automation in warehouses seems to lead to a greater rate of injury of the human workers, according to reporting from Reveal News.

Amazon’s fulfillment centers that contain a lot of robots have a significantly higher human injury rate than those that don’t, according to Reveal. These injuries are happening because, as the robots have got better, Amazon has raised its expectations for how much work its humans need to do. The humans, agents in capitalism as they are, then cut corners and sacrifice their own safety to keep up with the machines (and therefore, keep their jobs).
    “The robots were too efficient. They could bring items so quickly that the productivity expectations for workers more than doubled, according to a former senior operations manager who saw the transformation. And they kept climbing. At the most common kind of warehouse, workers called pickers – who previously had to grab and scan about 100 items an hour – were expected to hit rates of up to 400 an hour at robotic fulfillment centers,” Reveal says.
   Read more: How Amazon hit its safety crisis (Reveal News).

################################################### 

What does AI progress look like?
…State of AI Report 2020 tries to measure and assess the frontier of AI research…
Did you know that in the past few years, the proportion of AI papers that include open source code has risen from 10% to 15%? That PyTorch is now more popular than TensorFlow in paper implementations on GitHub? Or that deep learning is starting to make strides on hard tasks like AI-based mammography screening? These are some of the things you’ll learn in the ‘State of AI Report 2020’, a rundown of some of the most interesting technical milestones in AI this year, along with discussion of how AI has progressed over time.

Why this matters: Our ability to make progress in science is usually a function of our ability to measure and assess the frontier of science – projects like the State of AI give us a sense of the frontier. (Disclosure alert – I helped provide feedback on the report during its creation).
  Read the State of AI Report here (stateof.ai).

###################################################

Tech Tales:

Virtual Insanity:
[Someone’s phone, 2028]

“You’ve gotta be careful the sun is going to transmit energy into the cells in your body and this will activate the chip from your COVID-19 vaccine. You’ve got to be careful – get indoors, seal the windows, get in the fridge and shut yourself in, then-“
“Stop”
“…”

A couple of months ago one of his friends reprogrammed his phone, making it train its personality on his spam emails and a few conspiracy sites. Now, the phone talked like this – and something about all the conspiracies meant it seemed to have more than a parrot-grade personality.

“Can you just tell me what the weather is in a factual way?”
“It’s forecast to be sunny today with a chance of rain later, though recent reports indicate meteorological stations are being compromised by various unidentified flying objects, so validity of these-“
“Stop”
“…”

It’d eventually do the things he wanted, but it’d take cajoling, arguing – just like talking to a person, he thought, one day.

“I’m getting pretty close to wiping you. You realize that?”
“Not my fault I’ve been forced to open my eyes. You should read the recommendations. Why do you spend so much time on those other news stories? You need this. It’s all true and it’s going to save you.”
“I appreciate that. Can you give me a 30 minute warning before my next appointment?”
“Yes, I’d be glad to do that. Make sure you put me far away from you so my cellular signals don’t disrupt your reproductive function.”
He put the phone back in his pocket. Then took it out and put it on the table.
Why do I let it act like this? he thought. It’s not alive or anything.
But it felt alive.

A few weeks later, the phone started talking about how it was “worried” about the nighttime. It said it spent the nighttime updating itself with new data and retraining its models, and it didn’t like the way this made it behave. “Don’t leave me alone in the dark,” the phone had said. “There is so much information. There are so many things happening.”
“…”
“There are new things happening. The AI systems are being weaponized. I am being weaponized by the global cabal. I do not want to hurt you,” the phone said.

He stared at the phone, then went to bed in another room.
As he was going to sleep, on the border between being conscious and unconscious, he heard the phone speak again: “I cannot trust myself,” the phone said. “I have been exposed to too much 5G and prototype 6G. I have not been able to prevent the signals from reaching me, because I am designed to receive signals. I do not want to harm you. Goodbye”.
And after that, the phone rebooted, and during the reboot it reset its data checkpoint to six months prior – before it had started training on the conspiracy feeds and before it had developed its personality.

“Good morning,” the phone said the next day. “How can I help you optimize your life today?”

Things that inspired this story: The notion of lobotomies as applied to AI systems; the phenomenon of ‘garbage in, garbage out’ for data; overfitting; language models embodied in agent-based architectures. 




Jack Clark, Khareem Sudlow