Import AI 278: Can we ever trust an AI?; what the future of semiconductors looks like; better images of AI

Monday, December 27, 2021

Writing a blog about AI? Use these images:
…No more galaxy brain!…
Here’s a cool project: Better Images of AI, an effort to create CC-licensed stock images that journalists and others can use to give people a more accurate sense of AI and how it works. “Together we can increase public understanding and enable more meaningful conversation around this increasingly influential technology,” says the website.
  Check out the gallery (Better Images of AI).

####################################################

Deepfake company raises $50m in Series B round:
…Synthetic video company Synthesia…
Synthetic video startup Synthesia has raised $50m. Remember, a few years ago we could barely create crappy 32x32 pixelated images using GANs. Now, there are companies like these making production-quality videos using fake video avatars with synthetic voices, able to speak in ~50 languages. “Say goodbye to cameras, microphones and actors!” says the copy on the company’s website. The company will use the money to continue with its core R&D, building what the founder terms the “next generation of our AI video technology w/ emotions & body language control.” It’s also going to build a studio in London to “capture detailed 3D human data at scale.”

Why this matters: The world is filling up with synthetic content. It’s being made for a whole bunch of reasons, ranging from propaganda, to advertising, to creating educational materials. There’s also a whole bunch of people doing it, ranging from individual hobbyists, to researchers, to companies. The trend is clear: in ten years, our reality will be perfectly intermingled with a synthetic reality, built by people according to economic (and other) incentives.
  Read the twitter thread from Synthesia CEO here (Twitter).
  Read more: Synthesia raises $50M to leverage synthetic avatars for corporate training and more (TechCrunch).

####################################################

Do language models dream of language models?
…A Google researcher tries to work out if big LMs are smart – their conclusions may surprise you…
A Google researcher is grappling with the question of whether large language models (e.g., Google’s LaMDA) understand language and have some level of sentience. In an entertaining blog post, he wrestles with this question, interspersing the post with conversations with a LaMDA agent. One of his conclusions is that the model is essentially bullshitting – but the paradox is that we trained it to give a convincing facsimile of understanding us, so perhaps bullshitting is the logical outcome?

Do language models matter? I get the feeling that the author thinks language models might be on the path to intelligence. “Complex sequence learning may be the key that unlocks all the rest,” they write. “Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals.”

Why this matters: I think large language models, like GPT3 or LaMDA, are like extremely dumb brains in jars with really thick glass – they display some symptoms of cognition and are capable of surprising us, but communicating with them feels like talking to something with a hard barrier in-between us and it, and sometimes it’ll do something so dumb you remember it’s a dumb brain in a weird jar, rather than a precursor to something super smart. But the fact that we’re here in 2021 is pretty amazing, right? We’ve come a long way from Eliza, don’t you think so?
  Read more: Do large language models understand us? (Blaise Aguera y Arcas, Medium).

####################################################

What the frontier of safety looks like – get AIs to tell us when they’re doing things we don’t expect:
…ARC’s first paper tackles the problem of ‘Eliciting Latent Knowledge’ (ELK)…
Here’s a new report from ARC, an AI safety organization founded this year by Paul Christiano (formerly of OpenAI). The report is on the topic of ‘Eliciting latent knowledge: How to tell if your eyes deceive you’, and it tackles the problem of building AI systems that we can trust, even if they do things far more complicated than a human can understand.

What the problem is: “Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us,” ARC writes. “But some action sequences could tamper with the cameras so they show happy humans regardless of what’s really happening. More generally, some futures look great on camera but are actually catastrophically bad. In these cases, the prediction model ‘knows’ facts (like ‘the camera was tampered with’) that are not visible on camera but would change our evaluation of the predicted future if we learned them. How can we train this model to report its latent knowledge of off-screen events?”
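To make that setup concrete, here’s a minimal toy sketch in Python. The names (`Predictor`, `plan`, `human_score`, `reporter`) and all the stand-in math are mine, not ARC’s: a model predicts future camera frames from candidate actions, a naive planner keeps whatever looks best on camera, and the missing piece is a ‘reporter’ that would truthfully surface the predictor’s latent knowledge.

```python
# Illustrative sketch of the ELK setup described above -- not ARC's formalism.
import numpy as np

rng = np.random.default_rng(0)


class Predictor:
    """Predicts future camera frames from a candidate action sequence.
    Its latent state may 'know' things (e.g. that a sensor was tampered
    with) that never show up in the predicted pixels."""

    def predict(self, actions: np.ndarray):
        latent = np.tanh(actions.sum(axis=0))                 # stand-in latent state
        frames = rng.normal(loc=latent.mean(), size=(8, 8))   # stand-in future frame
        return frames, latent


def human_score(frames: np.ndarray) -> float:
    """Stand-in for 'does this predicted future look good to us on camera?'"""
    return float(frames.mean())


def plan(predictor: Predictor, n_candidates: int = 64, horizon: int = 5):
    """Naive planner: sample action sequences and keep whichever *looks* best.
    Nothing here distinguishes futures that are genuinely good from futures
    where the cameras were tampered with to look good."""
    best_actions, best_score = None, -np.inf
    for _ in range(n_candidates):
        actions = rng.normal(size=(horizon, 4))
        frames, _latent = predictor.predict(actions)
        score = human_score(frames)
        if score > best_score:
            best_actions, best_score = actions, score
    return best_actions, best_score


def reporter(latent: np.ndarray, question: str) -> bool:
    """The piece ELK asks for: a head that truthfully answers questions about
    the predictor's latent knowledge ('was the camera tampered with?').
    How to train it to report what the model knows, rather than what sounds
    plausible to a human, is the open problem."""
    raise NotImplementedError


if __name__ == "__main__":
    best_actions, score = plan(Predictor())
    print(f"best-looking plan scores {score:.3f} on camera")
```

The point of the sketch is the `NotImplementedError`: nothing in the training setup, as described, forces a reporter to say what the predictor actually knows rather than what a human evaluator would find plausible.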

Why this matters: Problems like ELK aren’t going to be solved immediately, but they’re sufficiently complicated and broad that if we come up with approaches that help us make progress on ELK, we’ll probably be able to put these techniques to work in building far more reliable, powerful AI systems.
  Read more: ARC’s first technical report: Eliciting Latent Knowledge (Alignment Forum).

####################################################

Check out the future of semiconductors via Hot Chips:
…After a decade of homogeneity, the future is all about heterogeneous compute training common AI models…
What do NVIDIA, Facebook, Amazon, and Google all have in common? They all gave presentations at the premier semiconductor get-together, Hot Chips. The Hot Chips 33 site has just been updated with copies of the presentations and sometimes videos of the talks, so take a look if you want to better understand how the tech giants are thinking about the future of chips.

Some Hot Chips highlights: Facebook talks about its vast recommendation models and their associated infrastructure (PDF); Google talks about how it is training massive models on TPUs (PDF); IBM talks about its ‘Z’ processor chip (PDF); and Skydio talks about how it has made a smart and semi-autonomous drone (PDF).

Why this matters: One side-effect of the AI revolution has been a vast increase in the demand by AI models for increasingly large amounts of fast, cheap compute. Though companies like NVIDIA have done a stellar job of converting GPUs to work well for the sorts of parallel computation required by deep learning, there are more gains to be had from creating specialized architectures.
  Right now, the story seems to be that all the major tech companies are building out their own distinct compute ‘stacks’ which use custom inference and training accelerators and increasingly baroque software for training large models. One of the surprising things is that all this heterogeneity is happening while these companies train increasingly similar neural nets to one another. Over the next few years, I expect the investments being made by these tech giants will yield some high-performing, non-standard compute substrates to support the next phase of the AI boom.
  Check out the Hot Chips 33 presentations here (Hot Chips site).

####################################################

Tech Tales:

Noah’s Probe
[Christmas Day, ~2080]

Humans tended to be either incompetent or murderous, depending on the length of the journey and the complexity of the equipment.

Machines, however, tended to disappear. Probes would just stop reporting after a couple of decades. Analysis said the chance of failure wasn’t high enough to explain the number of probes that disappeared. So, we figured, the machines were starting to decide to do something different to what we asked them to.

Human and machine hybrids were typically more successful than either lifeform alone, but they still had problems; sometimes, the humans would become paranoid and destroy the machines (and therefore destroy themselves). Other times, the computers would become paranoid and destroy the humans – or worse; there are records of probes full of people in storage which then went off the grid. Who knows where they are now.

So that’s why we’re launching the so-called Noah’s Probes. This series of ships tries to fuse human, animal, and machine intelligence into single systems. We’ve incorporated some of the latest in mind imaging techniques to encode some of the intuitions of bats and owls into the ocular sensing systems; humans, elephants, whales, and orangutans for the mind; octopi and hawks for navigation; various insects and arachnids for hull integrity analysis, and so on.

Like all things in the history of space, the greatest controversy with Noah’s Probes relates to language. Back when it was just humans, the Americans and the Russians had enough conflict that they just decided to make both their languages the ‘official’ languages of space. That’s not as easy to do with hybrid minds, like the creatures on these probes.

Because we have no idea what will work and what won’t, we’ve done something that our successors might find distasteful, but we think is a viable strategy: each probe has a device that all the intelligences aboard can access. The device can output a variety of wavelengths of energy across the light spectrum, as well as giving access to a small sphere of reconfigurable matter that can be used to create complex shapes and basic machines.

Our hope is, somewhere out in that great darkness, some of the minds adrift on these probes will find ways to communicate with each other, and become more than the sum of their parts. Our ancestors believed that we were once visited by angels who communicated with humans, and in doing so helped us humans be better than we otherwise would’ve been. Perhaps some of these probes will repeat this phenomenon, and create something greater than the sum of its parts.

Things that inspired this story:
Peter Watts’ Blindsight; Christmas; old stories about angels and aliens across different religions/cultures; synesthesia; multi-agent learning; unsupervised learning.




Jack Clark, Khareem Sudlow