IT Security Guru
Chatbots Say Plenty About New Threats to Data

by The Gurus
June 17, 2020
in This Week's Gurus

By Amina Bashir and Mike Mimoso, Flashpoint

Chatbots are becoming a useful customer interaction and support tool for businesses. These bots are powered by artificial intelligence that lets customers ask simple questions, pay bills, or resolve disputes over transactions; they’re cheaper than hiring more call centre personnel, and they’re popping up everywhere.

As with most other innovations, threat actors have found a use for them too.

A number of recent security incidents have involved the abuse of a chatbot, either to steal personal or payment card information from customers or to post offensive messages in a business’s channel, threatening its reputation. There is potential for worse if attackers find further inroads: by exploiting vulnerabilities in chatbot code to sit in a man-in-the-middle position and steal data as it traverses the wire, or by sending users links to exploits that open access to the backend database where information is stored. Attackers may also mimic chatbots, impersonating an existing business’s messaging to interact with customers directly and steal personal information that way.

It’s an array of risks and threats hidden in a seemingly innocuous communication channel, and one that is challenging to mitigate.

Flashpoint analysts believe as businesses integrate chatbots into their platforms, threat actors will continue to leverage chatbots in malicious campaigns to target individuals and businesses across multiple industries. Moreover, threat actors will likely evolve the methods used to leverage chatbots in attacks as businesses move to enhance chatbot security.

Few Chatbot Attacks Made Public

Further complicating matters is that many attacks are going unreported. The attacks that are made public provide interesting insight into how attackers are leveraging chatbots. 

In June 2018, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 international customers. The threat actor group, identified as Magecart, targeted JavaScript built by service provider Inbenta for Ticketmaster UK’s chatbot. Inbenta said in a statement that a piece of custom JavaScript designed to collect personal information and payment card data for the Ticketmaster chatbot was exploited; the code had been supplied more than nine months earlier and was disabled immediately upon disclosure.

Microsoft and Tinder have also experienced issues with chatbots. In Microsoft’s case, the release of its AI chatbot Tay in 2016 was reportedly commandeered by threat actors who led it to spout anti-Semitic and racist abuse in an attack methodology classified as “pollution in communication channels.”

On the popular dating app Tinder, cybercriminals used a chatbot to conduct fraudulent activity by impersonating a female who asked victims to enter their payment card information to become verified on the platform.

Mitigations and Assessment

Awareness about potential risks related to chatbots isn’t high. For their part too, attackers likely hadn’t set out to exploit chatbot vulnerabilities, but in targeting the supply chain or scanning for bugs in code, found themselves an available and relatively new attack vector with direct access to users and their information. In addition to man-in-the-middle attacks where chatbots can be mimicked, attackers can use them in phishing and other social engineering scams. Attackers can also use chatbots to provide users with links redirecting them to malicious domains, steal information, or access protected networks.
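One practical defence against the malicious-link vector described above is to allowlist the destinations a chatbot is permitted to send. The sketch below illustrates the idea; the domain names are hypothetical placeholders, not from the original article.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only domains the business actually controls.
ALLOWED_DOMAINS = {"support.example.com", "pay.example.com"}

def is_safe_link(url: str) -> bool:
    """Reject any outbound chatbot link that is not HTTPS on an allowed domain."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_DOMAINS

print(is_safe_link("https://support.example.com/ticket/42"))  # True
print(is_safe_link("http://support.example.com/ticket/42"))   # False: not HTTPS
print(is_safe_link("https://evil.example.net/phish"))         # False: unknown domain
```

An allowlist is deliberately stricter than a blocklist: anything the business has not explicitly approved is refused, which also catches look-alike phishing domains.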

Since most of these attacks are essentially attacks against software, tried-and-tested security hygiene goes a long way as a mitigation. That starts with requiring multi-factor authentication to verify a user’s identity before any personal or payment card data is exchanged through a chatbot.
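The gating pattern can be sketched as a small session state machine: sensitive intents stay blocked until a one-time code, delivered out of band, is verified. This is a minimal illustration using only the standard library; class and method names are invented for the example.

```python
import hmac
import secrets

class ChatbotSession:
    """Toy session: sensitive intents are blocked until a one-time code is verified."""

    def __init__(self):
        self.verified = False
        self._code = None

    def request_sensitive_action(self) -> str:
        if self.verified:
            return "ACTION_ALLOWED"
        # Generate a one-time code; in a real system it is delivered out of band
        # (SMS, authenticator push) -- never through the chat channel itself.
        self._code = secrets.token_hex(3)  # six hex digits
        return "MFA_REQUIRED"

    def submit_code(self, code: str) -> bool:
        # Constant-time comparison avoids leaking the code through timing.
        if self._code and hmac.compare_digest(code, self._code):
            self.verified = True
        return self.verified
```

Only after `submit_code` succeeds would the bot proceed to collect or display payment card details.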

Monitoring for and deploying regular software updates and security patches is imperative. Organisations should also consider encrypting conversations between the user and the chatbot, as this is also essential to warding off the loss of personal data.
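Encrypting the user-to-chatbot conversation in transit usually means terminating it over TLS with certificate verification enforced. A minimal sketch of a hardened client-side TLS configuration, using Python’s standard `ssl` module:

```python
import ssl

def make_chatbot_tls_context() -> ssl.SSLContext:
    """TLS context that verifies certificates and refuses legacy protocol versions."""
    context = ssl.create_default_context()            # CERT_REQUIRED + hostname checks
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 / TLS 1.0 / 1.1
    return context
```

The context would then wrap the socket to the chatbot backend (e.g. `context.wrap_socket(sock, server_hostname=host)`), so that a man-in-the-middle presenting a forged certificate is rejected before any conversation data flows.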

Companies may also consider breaking messages into smaller bits and encrypting those bits individually rather than the whole message. This approach makes offline decryption in the case of a memory leak attack much more difficult for an attacker. Additionally, appropriately storing and securing the data collected by chatbots is crucial. Companies can encrypt any stored data, and rules can be set in place regarding the length of time the chatbot will store this data.
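The chunking idea above can be sketched as follows. This is a toy illustration using a XOR one-time pad so it stays standard-library only; a real deployment should use authenticated encryption such as AES-GCM per chunk, with keys held in a separate key store.

```python
import secrets

CHUNK = 16  # bytes per chunk

def encrypt_chunked(message: bytes) -> list:
    """Split a message into chunks and encrypt each with its own fresh key.

    Toy XOR one-time pad for illustration only; use AES-GCM (or similar
    authenticated encryption) per chunk in production.
    """
    pieces = []
    for i in range(0, len(message), CHUNK):
        chunk = message[i:i + CHUNK]
        key = secrets.token_bytes(len(chunk))        # fresh key per chunk
        ciphertext = bytes(a ^ b for a, b in zip(chunk, key))
        pieces.append((key, ciphertext))             # keys kept separately in practice
    return pieces

def decrypt_chunked(pieces: list) -> bytes:
    return b"".join(bytes(a ^ b for a, b in zip(key, ct)) for key, ct in pieces)
```

The payoff is containment: an attacker who recovers a single chunk key from a memory leak can decrypt only that one chunk, not the whole conversation.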

Finally, the rise in chatbot-related attacks should also reinforce the need for continuous end-user education to counter social engineering.
