
Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye


98% of deepfakes are porn

AI image generation has become outrageously good in the past 12 months … and some people (mostly men) are increasingly using the tech to create homemade deepfake porn of people they fantasize about using pics culled from social media.

The subjects hate it, of course, and the practice has been banned in the United Kingdom. However, there is no federal law that outlaws creating deepfakes without consent in the United States.

The Nudify app (Nudify.online)

Face-swapping mobile apps like Reface make it simple to graft a picture of someone’s face onto existing porn images and videos. AI tools like DeepNude and Nudeify create a realistic rendering of what the AI thinks someone looks like nude. One NSFW AI art generator even cranks out anime porn deepfakes for $9.99 a month.

According to social network analytics company Graphika, there were 24 million visits to this genre of websites in September alone. “You can create something that actually looks realistic,” analyst Santiago Lakatos explains.

Such apps and sites are mainly advertised on social media platforms, which are slowly starting to take action, too. Reddit has a prohibition on nonconsensual sharing of faked explicit images and has banned several domains, while TikTok and Meta have banned searches for keywords relating to “undress.”

Around 98% of all deepfake vids are porn, according to a report by Home Security Heroes. We can’t show you any of them, so here’s one of Biden, Boris Johnson and Macron krumping.

Technology- and celebrity-obsessed South Korea leads the trend, with South Koreans the subjects of 53% of all deepfake porn on the web.

K-pop singers (58%) and South Korean actresses (33%) make up the overwhelming majority of targets, with one singer the subject of 1,595 videos that have been viewed more than 5.5 million times.

A survey of 1,522 American men found that while 68% would be shocked and outraged by the violation of privacy and consent involved if the deepfake depicted someone they knew, actual consumers of deepfake porn aren’t much bothered. Around three-quarters didn’t feel guilty about it at all.

Home Security Heroes’ top ten list consists mostly of K-pop stars.

Grok is no edgelord

Grok turns out not to be the truth-spewing edgelord chatbot Elon Musk had hoped for.

X promised Grok would “answer spicy questions that are rejected by most other AI systems,” but in the field, Grok has been answering questions just like ChatGPT does.

It thinks Musk’s favorite phrase, the “woke mind virus,” “is a load of BS”; it says that trans women are women (or it did until conservative account Ian Miles Cheong apparently beat it into submission); it supports Joe Biden for president due to his commitment to social justice, and it says it’s not too fond of Christians.


A political compass test placed Grok in the left-libertarian quadrant, slightly further out than ChatGPT. Musk says he’ll be “taking immediate action to shift Grok closer to politically neutral.”

ChatGPT gets lazier

In recent days, more and more users have reported that instead of carrying out a task, ChatGPT seems uninterested or gives a partial answer and tells users to finish the job themselves. To me, that’s a sure sign that human-level artificial general intelligence has been achieved.

Many suspect OpenAI has nerfed it to reduce the phenomenal cost of the system, but OpenAI says that’s not the case.

In the meantime, users have resorted to bribing the AI with tips for better answers and impressing upon it in the prompt how earth-shatteringly crucial the best possible answer is.

Google’s Gemini video was unreal

If it quacks like a duck, it’s fake. (Google)

As we now know, Google’s “mind-blowing” and “unreal” product video for Gemini Ultra was faked — but is the deception significant?

The video shows a man having a natural-sounding conversation with the AI, which recognizes he is in the process of drawing a duck. Gemini Ultra can also work out which cup the ball is under during a shell game and figure out when someone is playing rock paper scissors.

Guess the game. (Google)

But in reality, there was no video and no vocalized conversation. The AI was prompted via text and simply shown still images.

However, Google says the prompts and outputs were real, and Gemini can actually recognize a drawing of a duck.

Duck recognition has long eluded scientists, so this looked like a huge breakthrough … although, as it happens, ChatGPT can recognize ducks, too.

Oriol Vinyals, the vice president of research for Google DeepMind, said the video was merely a serving suggestion.

“The video illustrates what the multimodal user experiences built with Gemini could look like. We made it to inspire developers.”

Test results for Gemini show it’s just beating GPT-4 on seven out of eight benchmarks — which sounds good until you realize that GPT-4 was completed a year ago, giving OpenAI a 12-month head start on GPT-5.

Fetch.AI = Google of AI Agents?

AI Eye caught up with Humayun Sheikh this week — the founder of Fetch.ai and a former commercial director of DeepMind (now Google DeepMind). Sheikh says DeepMind was “doing GPT-like things 13 years ago,” meaning Google deliberately decided not to release an LLM until ChatGPT forced its hand.

“I was quite surprised it came out quite late, and they gave OpenAI a chance. But I think they were ready a while back,” he says.

“The problem is that if you start using this technology, and you let it loose, you start cannibalizing your own business,” he adds, speculating that the AI might undercut the revenue from Google’s lucrative search business.

That said, Sheikh believes Google is still in a winning position in the AI arms race.

“Google has maps, Google has advertising, Google has businesses and all the reach you need to integrate with a suite of applications,” he says. “So I think OpenAI has a problem, and OpenAI will have to drastically change to be viable.”


Fetch.AI is in the business of helping users create AI agents, with 100,000 up and running so far. The bots can help with travel bookings, EV charging and looking after IoT devices.

You can give an AI agent a goal, and it will create a bunch of subtasks and autonomously carry them out until the goal is achieved. Sheikh says blockchain is a natural fit to coordinate agents and to record their performance.
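The loop Sheikh describes — goal in, subtasks out, executed autonomously until done — can be sketched in a few lines of Python. To be clear, every name here is illustrative and has nothing to do with Fetch.ai’s actual SDK; a real agent would ask an LLM to generate the subtask list rather than hard-code it:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    completed: list = field(default_factory=list)

    def plan(self):
        # A real agent would ask an LLM to decompose the goal;
        # here we hard-code a travel-booking example.
        return ["search flights", "compare prices", "book ticket", "confirm booking"]

    def run(self):
        # Work through each subtask in order; in practice each one
        # would call out to a tool, an API or another agent.
        for task in self.plan():
            self.completed.append(task)
        return self.goal, self.completed

agent = Agent(goal="book a flight to Seoul")
goal, done = agent.run()
```

The agent keeps no state beyond its task list here; the pitch for putting blockchain underneath is that the orchestration and payment for each of those micro-tasks gets recorded somewhere neutral.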

“If you want to interact with multiple entities, if you want to interact with multiple pieces of code, if you want to interact with multiple machine learning AI algorithms, you need a new framework. You cannot run that on an old Web 2.0 framework.”

“The orchestration of these tasks needs to happen somewhere,” he says. “And these micro-tasks need some monetization, which the blockchain provides.”

Peaq blockchain, Fetch.ai and Bosch have just unveiled the Bosch XDK110 Rapid Prototyping Kit, which utilizes a small sensor that can capture data on things like pollution levels, the weather or seismic activity, and then feed the data back to multiple decentralized physical infrastructure networks (DePINs) in exchange for tokens.

So, for example, you could ask an AI agent to check the tire pressure sensors in your car and to check sensors recording the weather conditions, and work out if the tire pressure is too low or too high or if you need new tires. If you do, it can book you in at the tire place.

Or you could ask your AI agent to perform sentiment analysis on a particular stock and buy it if the analysis is favorable.
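That sentiment-then-trade pattern is easy to caricature in code. This is a toy sketch only — a keyword counter standing in for real sentiment analysis, and a string standing in for a real order — not anything Fetch.ai ships:

```python
def sentiment_score(headlines):
    # Crude stand-in for a real sentiment model: count hits against
    # hand-picked positive and negative keyword sets.
    positive = {"beats", "surges", "record", "upgrade"}
    negative = {"misses", "falls", "lawsuit", "downgrade"}
    score = 0
    for headline in headlines:
        words = set(headline.lower().split())
        score += len(words & positive) - len(words & negative)
    return score

def maybe_buy(ticker, headlines, threshold=1):
    # Only act when sentiment clears the threshold; otherwise hold.
    if sentiment_score(headlines) >= threshold:
        return f"BUY {ticker}"
    return f"HOLD {ticker}"

decision = maybe_buy("ACME", ["ACME beats earnings on analyst upgrade"])
```

The interesting design question isn’t the sentiment model but the gate: the agent only gets to spend money when an explicit condition is met, which is exactly the kind of rule you’d want enforced on-chain.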

Sheikh says the plan is for Fetch.AI to become the search engine for agents, “and we’ll also be a self-assembly engine, which puts all this stuff together without you having to code it or integrate it.”

All Killer, No Filler AI News

— Meta dropped 20 new AI features for its social media platforms to jazz up search, ads, and business messaging. The free image generator is the one people are excited about.  

— Boffins from Alibaba scraped a bunch of TikTok influencer videos, and now you can generate pics of people dancing from a still photo and some pose guidance.

— Professor Ethan Mollick has compared and contrasted all the LLMs and says the best one to use is GPT-4. Research suggests it boosts real work performance, generates better ideas than most humans, seems smarter than the competition and has the most features.

— McDonald’s is building an AI chatbot called Ask Pickles that’s trained on data from 50,000 restaurants. Franchisees and staff can now interrogate the AI to better understand exactly how to do everything in a more McDonaldsy way.

— Amazon currently has around three-quarters of a million robots working alongside 1.5 million employees.

— Grok user Jax Winterbourne was taken aback when the AI refused to answer a question and cited OpenAI policy as the rationale. While some users believe Grok has been revealed as nothing more than a front end for ChatGPT, xAI said it picked up some of ChatGPT’s outputs in its training data.

Tweet of the week

A new perspective just dropped: “The only thing LLMs do is hallucinate, the trick is in getting the hallucinations to align with reality.”

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.


