Hey everyone, welcome to TechCrunch’s regular AI newsletter.
In AI news this week, the U.S. Supreme Court overturned “Chevron deference,” a 40-year-old ruling on federal agency power that required courts to defer to federal agencies’ interpretations of Congressional statutes.
Chevron deference allowed government agencies to make their own rules when Congress left parts of a statute ambiguous. Now courts will be expected to exercise their own legal judgment, with potentially far-reaching implications. As Axios' Scott Rosenberg writes, Congress has never been the most functional of bodies; agencies can no longer apply basic rules to new enforcement situations, so lawmakers must now attempt to effectively predict the future through legislation.
And that could kill any attempt at nationwide AI regulation forever.
Congress has already struggled to pass a basic AI policy framework, forcing state regulators from both parties to step in. Any regulations Congress writes going forward will need to be highly specific to survive legal challenges, a seemingly insurmountable task given the speed and unpredictability of the AI industry.
Justice Elena Kagan mentioned AI specifically during oral argument.
Let's imagine that Congress enacts an artificial intelligence bill, and there are various mandates in place. Because of the nature of things, and particularly the nature of the subject matter, there are going to be various places where Congress has effectively left gaps, even without an explicit mandate. … [D]o we want courts or agencies to fill that gap?
Either the courts will now fill those gaps, or federal lawmakers will conclude the effort is futile and shelve their AI bills. Whatever the outcome, regulating AI in the U.S. has become orders of magnitude harder.
News
The environmental costs of Google's AI: Google has published its 2024 Environmental Report, an 80-plus-page document outlining the company's efforts to apply technology to environmental issues and reduce its negative impact on the environment. But it doesn't address the question of how much energy Google's AI uses, Devin writes. (AI is notoriously power-hungry.)
Figma disables design features: Figma CEO Dylan Field said the company would temporarily disable its “Make Design” AI feature, which was accused of plagiarizing the design of Apple’s weather app.
Meta changes AI labels: After Meta began labeling photos as "Created with AI" in May, photographers complained that the company was mistakenly labeling real photos. Ivan reports that Meta has now changed the label to "AI Info" across all of its apps in response to the criticism.
Robot cats, dogs and birds: Brian writes about how New York state is distributing thousands of robotic animals to seniors amid a “loneliness epidemic.”
Apple brings AI to Vision Pro: Apple’s plans go beyond the previously announced release of Apple Intelligence for iPhones, iPads, and Macs: Bloomberg’s Mark Gurman reports that the company is also working to bring these features to its Vision Pro mixed reality headsets.
Research Paper of the Week
Text generation models like OpenAI's GPT-4o have become a staple in tech. These days, they're used for a wide variety of tasks, from completing emails to writing code.
But despite the popularity of these models, the science of how they "understand" and generate human-like text remains something of a mystery. To chip away at that mystery, researchers at Northeastern University looked at tokenization: the process of breaking text down into units called tokens, which makes the text easier for models to work with.
Today's text generation models process text as a sequence of tokens drawn from a fixed "token vocabulary." A token can correspond to a single word ("fish") or a piece of a larger word ("sal" and "mon" in "salmon"). The vocabulary of tokens available to a model is typically determined before training, based on the characteristics of the training data. But the researchers found evidence that models also develop an implicit vocabulary that maps groups of tokens (for example, multi-token words like "northeastern" or the phrase "break a leg") into semantically meaningful "units."
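To make the distinction concrete, here's a minimal Python sketch (not the researchers' code; the token and unit vocabularies below are invented for illustration) of a toy subword tokenizer that splits words into tokens, plus an "implicit vocabulary" that re-groups certain token sequences into single semantic units:

```python
# Toy illustration: a greedy subword tokenizer plus an "implicit
# vocabulary" that merges token groups back into semantic units.
# Both vocabularies are made up; real models learn far larger ones.

TOKEN_VOCAB = ["north", "eastern", "sal", "mon", "fish", "break", "a", "leg", " "]

def tokenize(text):
    """Greedy longest-match tokenizer over TOKEN_VOCAB (a crude stand-in for BPE)."""
    tokens, i = [], 0
    while i < len(text):
        for tok in sorted(TOKEN_VOCAB, key=len, reverse=True):
            if text.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

# The "implicit vocabulary": groups of tokens treated as one meaningful unit.
IMPLICIT_VOCAB = {
    ("north", "eastern"): "northeastern",
    ("sal", "mon"): "salmon",
    ("break", " ", "a", " ", "leg"): "break a leg",
}

def group_units(tokens):
    """Greedily merge token groups found in IMPLICIT_VOCAB into single units."""
    units, i = [], 0
    while i < len(tokens):
        for group, unit in sorted(IMPLICIT_VOCAB.items(), key=lambda kv: -len(kv[0])):
            if tuple(tokens[i:i + len(group)]) == group:
                units.append(unit)
                i += len(group)
                break
        else:
            units.append(tokens[i])
            i += 1
    return units

print(tokenize("northeastern"))         # ['north', 'eastern']
print(group_units(tokenize("salmon")))  # ['salmon']
```

The point is that "northeastern" never appears in the token vocabulary, yet the model can still treat the two-token sequence as one concept; probing for those groupings is what the researchers set out to do.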
Armed with this evidence, the researchers developed techniques to "probe" an open model's implicit vocabulary, extracting phrases like "Lancaster," "World Cup player," and "Royal Navy" from Meta's Llama 2, along with lesser-known terms like "Bundesliga player."
Although the study has not yet been peer-reviewed, the researchers believe it represents a first step toward understanding how lexical representations are formed within models and could be a useful tool for uncovering what a particular model “knows.”
Model of the Week
Meta's research team has trained several models to create 3D assets (textured 3D shapes) from text descriptions, suitable for projects like apps and video games. While plenty of models can generate shapes, Meta claims its are "state-of-the-art" and support physically based rendering, which lets developers "relight" objects so they appear to be lit by one or more light sources.
The researchers generated the shapes by combining two models, AssetGen and TextureGen, inspired by Meta’s Emu image generator, into a single pipeline called 3DGen: AssetGen converts text prompts (e.g., “A T-rex wearing a green woolen sweater”) into a 3D mesh, while TextureGen enhances the “quality” of the mesh, adds texture, and generates the final shape.
3DGen, which can also be used to re-texture existing shapes, takes about 50 seconds from start to finish to generate one new shape.
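The two-stage flow described above can be sketched as a simple pipeline. Note that AssetGen and TextureGen are Meta research models with no public API; the classes and method names below are hypothetical stubs meant only to show how the stages compose:

```python
# Illustrative sketch of the two-stage 3DGen pipeline described above.
# These are hypothetical stubs, not real bindings to Meta's models.

from dataclasses import dataclass

@dataclass
class Mesh:
    prompt: str
    textured: bool = False

class AssetGen:
    """Stage 1: convert a text prompt into a rough 3D mesh (stub)."""
    def generate(self, prompt: str) -> Mesh:
        return Mesh(prompt=prompt)

class TextureGen:
    """Stage 2: refine the mesh and add textures (stub)."""
    def texture(self, mesh: Mesh) -> Mesh:
        mesh.textured = True
        return mesh

def three_d_gen(prompt: str) -> Mesh:
    """The combined pipeline: AssetGen's output feeds TextureGen."""
    mesh = AssetGen().generate(prompt)
    return TextureGen().texture(mesh)

asset = three_d_gen("A T-rex wearing a green woolen sweater")
print(asset.textured)  # True
```

Chaining the stages this way is also what makes re-texturing possible: an existing mesh can be fed straight into the second stage without regenerating the shape.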
"By combining [these models'] strengths, 3DGen enables very high-quality 3D object synthesis from text prompts in under a minute," the researchers wrote in a technical paper. "When evaluated by professional 3D artists, 3DGen's output was almost always preferred over industry alternatives, especially for complex prompts."
Meta appears to be looking to bring tools like 3DGen into metaverse game development. According to a job listing, the company aims to research and prototype VR, AR, and mixed reality games, possibly created with the help of generative AI techniques, including custom shape generators.
Grab Bag
As a result of the partnership between the two companies announced last month, Apple may join OpenAI’s board of directors as an observer.
Bloomberg reports that Phil Schiller, the Apple executive responsible for leading the App Store and Apple events, will join OpenAI's board as its second observer, alongside Microsoft's Dee Templeton.
The move, if it goes ahead, would be a remarkable show of force for Apple, which plans to integrate OpenAI’s AI-powered chatbot platform ChatGPT into many of its devices this year as part of an expansion of its AI capabilities.
Apple reportedly isn't paying OpenAI in cash; it argued that the PR exposure OpenAI receives from the ChatGPT integration is worth as much as, if not more than, money. In fact, OpenAI is said to be considering a deal that would give Apple a cut of the revenue from ChatGPT's premium features on Apple platforms.
So, as my colleague Devin Coldewey pointed out, Microsoft, a close collaborator with and major investor in OpenAI, finds itself in the awkward position of essentially subsidizing Apple's ChatGPT integration while getting very little in return. Apple seems to be getting what it wants, even if that means conflicts its partners have to resolve.