Generative AI: how Meta, Google and Snap will use the new AI tools
I’m frankly feeling quite fatigued by social media at the moment—thanks to the deeply unproductive Twitter/X drama and the fact that my Facebook feed is seemingly 80% sponsored content these days—but I’ll try to maintain a presence on Bluesky, at least for a while. Separately, check out my colleague Kylie Robison’s article on Bluesky’s first big test. Some users were signing up with racial slurs in their usernames, and a couple of the Twitter clone’s investors were unhappy with CEO Jay Graber for not speaking up.
One of Meta’s fastest-growing categories is business messaging on platforms like WhatsApp, where the firm sees potential for future-facing tech like virtual agents. Customer experience is one of the most valued use cases for generative AI among executive leaders, a recent Gartner survey found. CM3Leon’s architecture uses a decoder-only transformer akin to well-established text-based models. What sets CM3Leon apart, however, is its ability to take both text and images as input and to generate both in return.
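To make that architectural idea more concrete, here is a minimal, hypothetical PyTorch sketch (not Meta’s CM3Leon code): a single causal decoder operates over one shared vocabulary that combines text tokens and discrete image-codebook tokens, so the same next-token head can emit either modality. The vocabulary sizes and model dimensions below are made up for illustration.

```python
import torch
import torch.nn as nn

class JointVocabDecoder(nn.Module):
    """Toy causal decoder over a shared vocabulary of text tokens and
    discrete image-codebook tokens, sketching the idea of one
    autoregressive model emitting either modality."""

    def __init__(self, text_vocab=32_000, image_vocab=8_192,
                 d_model=512, n_heads=8, n_layers=6, max_len=1_024):
        super().__init__()
        vocab = text_vocab + image_vocab          # one shared token space
        self.tok_emb = nn.Embedding(vocab, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, tokens):                    # tokens: (batch, seq)
        seq = tokens.shape[1]
        pos = torch.arange(seq, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(seq).to(tokens.device)
        x = self.blocks(x, mask=causal)           # causal self-attention only
        return self.lm_head(x)                    # next-token logits over both modalities

model = JointVocabDecoder()
dummy = torch.randint(0, 32_000 + 8_192, (2, 64))  # mixed text/image token ids
logits = model(dummy)                              # shape: (2, 64, 40_192)
```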
A.I.-Generated Versions of Art-Historic Paintings Are Littering Google’s Top Search Results
Meta has long pursued an open-source approach with its AI initiatives and is known as one of the biggest contributors to the industry. This year alone it has released numerous AI models and training datasets to the AI community. OpenAI especially has made great strides this year, with ChatGPT taking the internet by storm and the company following up with moneymaking services on the back of it, such as ChatGPT Plus and its recently launched ChatGPT for Business tool. In addition, the Microsoft Corp.-backed startup has been encouraging other companies to build atop its GPT-4 model, which also underpins ChatGPT.
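As a rough illustration of what building atop GPT-4 looks like in practice, the sketch below calls OpenAI’s Chat Completions API through the official Python client. The prompt, and the assumption that an `OPENAI_API_KEY` is set in the environment, are illustrative and not taken from any of the companies mentioned here.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write short, punchy ad copy."},
        {"role": "user", "content": "Give me three headlines for a trail-running shoe."},
    ],
)
print(response.choices[0].message.content)
```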
- “We need to invest heavily here,” wrote new head of infrastructure Santosh Janardhan, an attendee.
- While metaverse creation is part of the company’s long-term plan, generating more ad revenue is probably the need of the hour.
As Meta expands its services beyond social media, this AI toolset is set to reshape the advertising industry. By enhancing content creators’ workflows with these generative AI features, Meta aims to strengthen engagement, streamline the creative process and give brands the resources they need to connect with their target audiences quickly and effectively. The company is adding the opt-out tool as generative AI takes off across tech, with companies building more advanced chatbots that turn simple text prompts into sophisticated answers and images.
Meta says new focus on generative AI could boost metaverse development: Nikkei
There were two big problems here: moderation is hard as a platform scales up, and Graber also failed to properly apologize for the racist-handle issue. After a lot of pressure, she apologized to the community for Bluesky’s moderation failures, and also for the team’s extended silence about them. Whether Meta’s new AI models will ever catch up with OpenAI’s remains to be seen, though, since their power depends heavily on the scale of the data and parameters they are trained with. OpenAI hasn’t said publicly how many parameters GPT-4 has, but others have estimated it could be as much as 20 times bigger than Llama, at roughly 1.5 trillion parameters.
Meta’s Facebook AI division has developed its own image generation technology called Instance-Conditioned Generative Adversarial Networks (IC-GAN). According to its researchers, unlike standard GAN-based image generators, it can create images that are more diverse than those contained in its training datasets, which could reduce the cost of generating, collecting and storing training data for AI algorithms. Meta also has a text-to-video generative AI application called Make-A-Video, which it has said it plans to incorporate into its Reels short-form video platform in the future. All told, the announcement of Meta’s AI Sandbox for advertisers marks a significant milestone for the company.
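The instance-conditioning idea can be sketched in a few lines of PyTorch. The toy generator below is a simplified illustration rather than Meta’s IC-GAN implementation: it takes a noise vector concatenated with a feature embedding of a real reference image, so generated samples are steered toward that instance’s neighbourhood instead of a class label. The layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class InstanceConditionedGenerator(nn.Module):
    """Toy generator in the spirit of IC-GAN: samples are conditioned on a
    feature embedding of a real reference image, not on a class label."""

    def __init__(self, noise_dim=128, feat_dim=512, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + feat_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 2048),
            nn.ReLU(inplace=True),
            nn.Linear(2048, img_pixels),
            nn.Tanh(),                      # pixel values in [-1, 1]
        )

    def forward(self, noise, instance_feat):
        # Concatenate noise with the instance embedding (e.g. from a
        # self-supervised feature extractor) and map it to an image.
        return self.net(torch.cat([noise, instance_feat], dim=1))

gen = InstanceConditionedGenerator()
z = torch.randn(4, 128)                     # random noise
h = torch.randn(4, 512)                     # stand-in for instance features
fake_images = gen(z, h).view(4, 3, 64, 64)  # four 64x64 RGB samples
```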
How Meta, Google and Snap are embracing generative AI in advertising and beyond
While Meta is releasing some lightweight generative AI features for advertisers, some ad tech startups are leaning heavily into the technology. Omneky, which presented at TechCrunch Disrupt last year, used OpenAI’s DALL-E 2 and GPT-3 to create ads. Movio, which counts IDG, Sequoia Capital China and Baidu Ventures among its backers, is using generative AI to create marketing videos as well. Meta today announced an AI Sandbox for advertisers to help them generate alternative ad copy, create backgrounds from text prompts and crop images for Facebook or Instagram ads.
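For a sense of what prompt-driven background generation involves under the hood, here is a hedged sketch using an open-source inpainting pipeline from the diffusers library. It is an analogue of the concept, not Meta’s AI Sandbox, and the file names, mask geometry and checkpoint choice are illustrative; it assumes a GPU and a local product image.

```python
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

# Load an open-source inpainting model (a stand-in for Meta's internal tooling).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

product = Image.open("product_shot.png").convert("RGB").resize((512, 512))

# Mask: white = regions to regenerate (everything except a centre box where
# the product sits), black = regions to keep untouched.
mask = Image.new("L", (512, 512), 255)
ImageDraw.Draw(mask).rectangle([128, 128, 384, 384], fill=0)

result = pipe(
    prompt="sunny beach background, soft natural light, product photography",
    image=product,
    mask_image=mask,
).images[0]
result.save("ad_variant_beach.png")
```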
How to stop Meta from using some of your personal data to train generative AI models – CNBC
Less than two years ago, Meta – the parent company of Facebook – announced plans to go “all in” on virtual reality and the metaverse. With consumer engagement on those initiatives so far proving underwhelming, it has more recently focused its efforts on the technology world’s current hot topic: generative AI. “Currently, we’re working with a small group of advertisers in order to quickly gather feedback that we can use to make these products even better. In July, we will begin gradually expanding access to more advertisers with plans to add some of these features into our products later this year,” the company said in a blog post.
The company has also been reorganising its AI divisions and spending heavily to whip its infrastructure into shape, after determining early last year that it lacked the hardware and software capacity to support its AI product needs.
AudioCraft works for music, sound, compression, and generation — all in the same place. Because it’s easy to build on and reuse, people who want to build better sound generators, compression algorithms, or music generators can do it all in the same code base and build on top of what others have done. Community Forums bring people together to discuss tough issues, consider hard choices and share recommendations for improving people’s experiences across our apps.
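The shape of the AudioCraft workflow can be seen in the short sketch below, which follows the usage pattern documented in the AudioCraft repository; the checkpoint, clip duration and text descriptions are illustrative, and the example assumes the audiocraft package is installed and its model weights can be downloaded.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained text-to-music model and cap clips at 8 seconds.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)

descriptions = ["lo-fi hip hop beat with warm piano", "upbeat acoustic folk"]
wav = model.generate(descriptions)  # tensor of shape (batch, channels, samples)

for idx, one_wav in enumerate(wav):
    # Write each clip to disk with loudness normalisation.
    audio_write(f"clip_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```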
After selecting one of the three options in Meta’s opt-out form, users will need to pass a security check.
This tool allows advertisers to create visuals in different aspect ratios, such as social posts, Stories or even short videos like Reels. This simple-sounding automation is a much-needed feature in the creator space, since a significant amount of time is otherwise spent re-creating the same visual in different dimensions for each placement. The feature saves creators time and effort, improving the overall experience for creators and advertisers alike. A Meta spokesperson said that the company’s newest Llama 2 open-source large language model “wasn’t trained on Meta user data, and we have not launched any Generative AI consumer features on our systems yet.” “We’re going to play an important and unique role in the industry in bringing these capabilities to billions of people in new ways that other people aren’t going to do,” he added.
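The underlying task that the aspect-ratio tool automates, turning one creative into several placement-specific sizes, can be illustrated with a few lines of Pillow. This is a simplified, hypothetical sketch rather than Meta’s feature: it pads with a flat colour where a generative model would synthesise new background, and the placement names, canvas sizes and file names are made up.

```python
from PIL import Image, ImageOps

# Target canvases for common placements (illustrative sizes).
PLACEMENTS = {
    "feed_square": (1080, 1080),   # 1:1 feed post
    "story_reel": (1080, 1920),    # 9:16 Stories / Reels
    "landscape": (1920, 1080),     # 16:9 in-stream video
}

creative = Image.open("creative.png").convert("RGB")

for name, size in PLACEMENTS.items():
    # Fit the creative inside the target canvas, padding with white where the
    # aspect ratios differ (a generative tool would synthesise this area).
    variant = ImageOps.pad(creative, size, color="white")
    variant.save(f"creative_{name}.png")
```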
Furthermore, the meeting highlighted other ways Meta is utilizing generative AI for internal purposes. This includes an experimental internal-only interface to an ‘agents playground’ powered by LLaMA, which allows Meta employees to have conversations with AI agents and provide feedback to improve systems. Meta is currently working with MetaGen, which provides APIs for Meta’s text and image generation models for experimental use and prototyping.
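MetaGen and the agents playground are internal and not publicly documented, but the flavour of a LLaMA-powered conversation can be sketched with the openly released Llama 2 chat weights via Hugging Face transformers. Everything below, including the model ID, prompt and generation settings, is illustrative and assumes access to the gated checkpoint and a GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Open-weight Llama 2 chat model (requires accepting Meta's license on the Hub).
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Llama 2 chat prompt format: [INST] ... [/INST]
prompt = "[INST] Suggest one ad headline for a waterproof hiking boot. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```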