Warning issued to 2,500,000,000 Gmail users over ‘devastating scam’ which allows hackers to steal banking details and sensitive data

2.5 billion Gmail users have been warned over a ‘devastating scam’ that is said to allow hackers to steal banking details and other sensitive data.

While we can all do everything we can to ensure our devices are as secure as they possibly can be, some things are sometimes just out of our hands.

Cybercriminals are seemingly using all the right tricks to take advantage of innocent web users, and recently they have been targeting Gmail customers, using AI to create realistic phone calls and send out seemingly legitimate emails.

Following these hyper-realistic phone calls, an email is then sent which directs users to a website that looks identical to Google’s own. But the link is very much a scam.

And if that link is clicked, the hackers have the ability to commit identity, financial and information theft.

Spencer Starkey, a vice-president at SonicWall, has stated companies such as Google need to be on their toes to ensure their users are safe.

A warning has been issued to billions of Gmail users about a sophisticated AI scam (Klaudia Radecka/NurPhoto via Getty Images)

He said: “Cybercriminals are constantly developing new tactics, techniques, and procedures to exploit vulnerabilities and bypass security controls, and companies must be able to quickly adapt and respond to these threats.

“This requires a proactive and flexible approach to cybersecurity, which includes regular security assessments, threat intelligence, vulnerability management, and incident response planning.”

Victim Sam Mitrovic recalled his ordeal, telling Metro: “The scams are getting increasingly sophisticated, more convincing and are deployed at ever larger scale.

“People are busy and this scam sounded and looked legitimate enough that I would give them an A for their effort. Many people are likely to fall for it.”

Back in May 2024, the FBI issued a warning about the increasing threat of cybercriminals using AI in their scams, making them more difficult for users to spot.

Robert Tripp, from the FBI, said at the time: “Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike.

Hackers are finding new and more advanced ways to commit crimes online, according to experts (Getty Stock Photo)

“These sophisticated tactics can result in devastating financial losses, reputational damage, and compromise of sensitive data.”

The FBI also urged users to stay vigilant to ensure they don’t become victims of these scams.

“Be aware of urgent messages asking for money or credentials. Businesses should explore various technical solutions to reduce the number of phishing and social engineering emails and text messages that make their way to their employees,” the website states.

“Additionally, businesses should combine this technology with regular employee education about the dangers of phishing and social engineering attacks and the importance of verifying the authenticity of digital communications, especially those requesting sensitive information or financial transactions.”

Adding multi-factor authentication is also a good idea, according to the agency, to ensure you’re as well protected as possible.
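For those curious how the one-time codes behind many multi-factor apps actually work, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238, written in Python – the base32 secret shown is a made-up example, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example secret for illustration only; a real one comes from the QR code
# or setup key your provider shows when you enrol in multi-factor auth.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived from a shared secret and the current time, a scammer who steals your password still can’t log in without the short-lived code from your device.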

Zach Latta, the founder of Hack Club, who nearly fell victim to the scam, revealed how cybercriminals used a pretty simple method in an attempt to steal sensitive data.

Experts issue urgent warning to turn off default iPhone setting that could give hackers your personal information

This default iOS setting might seem convenient, but it could leave you vulnerable

The National Security Agency (NSA) has warned iPhone users to turn off one default setting that could put them at risk of being targeted by hackers.

No matter where we are in the world, we usually expect to be connected. We want to receive messages, flick through social media and Google who that celebrity from that one movie was, and the best way to do that is with a Wi-Fi connection.

Even if your data is struggling, Wi-Fi can ensure you have the world at your fingertips – but connecting to certain Wi-Fi networks comes with its risks.

We all want to stay connected when we’re out and about (Getty Stock Image)

It’s not too surprising to learn that public networks aren’t as secure as private ones, but what you might not know is that iPhones have a feature that can hook you up to public Wi-Fi even if you don’t actively seek to join it.

Known as ‘auto-join’, the feature sees iOS start by searching for the most preferred network, followed by private networks, then public networks, Apple explains.

Going into more detail on public networks, Apple said they’re those which are ‘designed for general access in public places, such as a hotel, airport or coffee shop’.
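To illustrate the ordering Apple describes – this is only a toy sketch for clarity, not Apple’s actual logic, and the network names are invented:

```python
def auto_join(visible, preferred, private):
    """Toy model of auto-join: most preferred first, then private, then public."""
    for name in preferred:                  # 1. known networks, in preference order
        if name in visible:
            return name
    for name in visible:                    # 2. any other private network in range
        if name in private:
            return name
    return visible[0] if visible else None  # 3. otherwise, fall back to a public one

# With no preferred or private network around, the phone falls back
# to whatever public hotspot happens to be in range.
print(auto_join(["CoffeeShopWiFi"], preferred=["HomeNet"], private={"HomeNet"}))
```

The key point is the fallback step: when nothing you trust is in range, the phone can still end up on a public hotspot you never chose.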

This might sound convenient if you’re in desperate need of an internet connection, but being automatically connected could mean that you end up in the hands of a hacker.

The NSA has issued warnings to explain that hackers might set up networks to look like something safe, for example a restaurant or hotel, then ’employ malicious access points redirecting to malicious websites, injecting malicious proxies, and eavesdropping on network traffic’.

Hackers are able to intercept public Wi-Fi networks (Getty Stock Image)

The agency described these networks as an ‘evil twin’, saying they ‘mimic the nearby expected public Wi-Fi, resulting in that actor having access to all data sent over the network’.

“The risk is not merely theoretical,” the NSA says. “Malicious techniques are publicly known and in use.”

If hackers are able to gain access to your phone, they also have the ability to steal identifying information and data.

How to disable Wi-Fi auto-connect

To avoid this risk, iPhone users are advised to disable the auto-connect feature by going to Settings > Wi-Fi.

There, you can find ‘Ask to Join Networks’ and tap ‘Off’ or ‘Ask’. When it comes to ‘Auto-Join Hotspot’, you should select ‘Never’.

“If users choose to connect to public Wi-Fi, they must take precautions,” the NSA says. “Data sent over public Wi-Fi – especially open public Wi-Fi that does not require a password to access – is vulnerable to theft or manipulation. Even if a public Wi-Fi network requires a password, it might not encrypt traffic going over it.”

To help keep yourself extra safe, keep an eye out for the padlock icon next to your browser’s URL – this indicates your web browsing is encrypted – and avoid inputting any of your credentials into pop-ups or familiar-looking websites that appear unexpectedly.
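That padlock corresponds to a verified TLS connection. As a rough sketch of what your browser is checking (the hostname here is just an example), Python’s standard library can perform the same certificate verification:

```python
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> None:
    """Open a TLS connection and print the verified certificate subject."""
    context = ssl.create_default_context()   # verifies against trusted CAs
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version(), tls.getpeercert()["subject"])

# An evil-twin hotspot can relay your traffic, but it cannot present a valid
# certificate for a domain it doesn't control – impersonation attempts make
# this call raise ssl.SSLCertVerificationError instead of printing.
check_tls("example.com")
```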

Former Google worker and tech pioneer issues alarming AI warning that will affect millions

The AI ‘godfather’ has issued a warning regarding the rise of AI.

The rise of artificial intelligence is an interesting and hotly debated topic in the tech industry.

Some people worry the advancements in technology that allow AI to thrive may result in their jobs being made redundant – or, in a worst-case scenario, AI taking over the world akin to Skynet in the Terminator series – while others look at the possibilities such a rise could bring to society.

Now, the ‘Godfather of AI’, who you’d think would know a thing or two about AI, has issued an alarming warning surrounding its rise.

Professor Geoffrey Hinton left Google last year after admitting to regrets regarding his work in the field of AI.

The tech pioneer is now warning about what the future may hold for AI, and has been talking about the possibility that it could lead to job losses for millions.

Hinton has suggested there needs to be a universal basic income to mitigate the impacts of AI’s rise within the job market.

Professor Geoffrey Hinton has issued a stark warning. (GEOFF ROBINS/AFP via Getty Images)

Speaking to BBC’s Newsnight, Hinton said: “I certainly believe in a universal basic income, but I don’t think that’s enough because a lot of people get their self-respect from the jobs they do.

“If you pay everybody a universal basic income, that solves the problem of them starving and not being able to pay the rent but that doesn’t solve the self-respect problem.”

He added that he’d ‘consulted Downing Street’ and ‘advised them that universal basic income was a good idea’.

The former Google tech developer worryingly suggests that many ‘mundane jobs’ will disappear because AI will simply be able to do them.

He went on: “I am very worried about AI taking over lots of mundane jobs. That should be a good thing.

Could this be a possibility in the future? (Getty Stock Photo)

“It’s going to lead to a big increase in productivity, which leads to a big increase in wealth, and if that wealth was equally distributed that would be great, but it’s not going to be.

“In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost, and that’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected.”

He guessed that ‘between five and 20 years from now’, there’ll be a ‘probability of half that we’ll have to confront the problem of AI trying to take over’ – which he worried would lead to an ‘extinction-level threat’ due to humans having ‘created a form of intelligence that is just better than biological intelligence… That’s very worrying for us’.

Not everyone is as concerned with the rise of AI though, with Bill Gates suggesting earlier this year that the technology has its limitations.

Expert shares warning to everyone who keeps an Amazon Alexa Echo in their bedrooms

Smart speakers are a common feature in households now, but it’s worth being mindful of their presence

If you like having your smart speaker play you some music or tell you what the weather’s going to be like before you even get out of bed, then you might want to pay attention to this expert’s warning.

Smart speakers and AI are a common part of our everyday lives now, so much so that many of us can be caught off guard by old-fashioned acts like having to physically turn down the volume on a speaker ourselves, or having to Google every little fact rather than just having the disembodied voice in the corner fill us in.

The inventions have definitely introduced a new way of approaching everyday tasks, but now experts have warned that people using speakers like Amazon’s Alexa should be wary about exactly where they are placing the speaker.

Smart speakers are a common part of households nowadays (Gado/Getty Images)

Experts have warned specifically against placing the device in bedrooms, for fear of users themselves becoming the ‘entertainment’.

Tech expert Dr Hannah Fry is among those who voiced concerns when she spoke to the Daily Mail back in 2019, the same year whistleblowers from Amazon suggested workers might have been tapping into conversations in order to check that the devices were working properly.

She said: “This technology is activated by a trigger word [such as ‘Alexa’] but it keeps recording for a short period afterwards. People accept that, but we should all spend more time thinking about what it means for us.

“There are people who are very senior in the tech world who will not have so much as a smartphone in their bedroom… If a company is offering you a device with an internet-connected microphone at a low price, you have to think about that very carefully…

“I have both an Alexa and a Google voice-activated device and I regularly turn them both off. People really must set their own limits.”

Smart speakers are activated by a chosen word (Smith Collection/Gado/Getty Images)

In response to concerns about smart speakers, a spokesperson for Amazon told LADbible Group: “Echo devices are designed to record audio only after the device detects your chosen wake word (Alexa, Amazon, Echo, Ziggy or Computer).

“You will always know when Alexa is sending your request to the cloud because a blue light indicator will appear on your Echo device. We manually review only a small fraction of one percent of Alexa requests to help improve Alexa.

“Access to these review tools is only granted to a limited number of employees who require them to improve the service.

“Our review process does not associate voice recordings with any customer identifiable information.”
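As a loose illustration of the design Amazon describes – a hypothetical sketch with made-up function names, not Amazon’s actual code – audio is gated on the device until the wake word is heard:

```python
WAKE_WORDS = {"alexa", "amazon", "echo", "ziggy", "computer"}

def handle_audio(words):
    """Discard audio until a wake word is detected locally,
    then forward only the request that follows."""
    streaming = False
    for word in words:
        if not streaming:
            streaming = word.lower() in WAKE_WORDS  # on-device detection only
        else:
            send_to_cloud(word)  # only post-wake audio leaves the device

def send_to_cloud(word):         # stand-in for the real upload step
    print("uploading:", word)

handle_audio(["play", "music", "alexa", "what's", "the", "weather"])
```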

Experts issue warning over fears of ‘haunting’ new AI tool that brings deceased loved ones ‘back to life’

University researchers have issued an important warning over emerging AI tech that could allow people to ‘speak to the dead’

Experts have issued an urgent warning over an AI tool which ‘brings back the dead’, branding the tech an ‘ethical minefield’ which could have ‘devastating’ consequences.

Technology continues to move at a rapid pace and the work being put into artificial intelligence tools is forever taking new leaps forward.

However, the moral and ethical issues surrounding emerging tech are not always considered.

Researchers at the University of Cambridge have begun warning about the future of some AI tools, most disturbingly regarding a tool that could allow users to hold text and voice conversations with lost loved ones.

In a paper entitled “‘Digital afterlife’: call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones”, released on May 8, researchers warn about the wider issues with, well, ‘talking’ to the dead.

“‘Deadbots’ or ‘Griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind,” the paper explains.

“Some companies are already offering these services, providing an entirely new type of ‘postmortem presence’.”

And if you think this sounds like something straight out of the Black Mirror TV show… that’s because it basically is.

An entire episode was dedicated to this very concept in the second season, and it highlighted the dangers and potential psychological harm that a person can go through using this kind of tool.

Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence, emphasized why tools like this can prove dangerous and advised caution within the industry.

Researchers at the University of Cambridge have begun warning about the future of some AI tools. (Getty Stock Image)

She said: “This area of AI is an ethical minefield. It’s important to prioritize the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Co-author Dr Tomasz Hollanek reiterated these points and suggested it would be crucial to implement a system that ensures the individual can eventually cut ties with the digital person, possibly by holding a funeral for it.

And if you think this sounds like something straight out of the Black Mirror TV show… that’s because it basically is. (Getty Stock Image)

“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” he said.

“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulation.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”
