ChatGPT causes increase in cyber security threats
Hyderabad: The tremendous growth of ChatGPT, a free chatbot powered by artificial intelligence, has attracted widespread attention. Developed by OpenAI, an artificial intelligence research organization, this advanced machine learning model can answer questions on a wide range of topics. However, as ChatGPT continues to gain popularity, so too has concern about the associated risks. Cybercriminals have increasingly taken advantage of the opportunity by creating near-identical replicas of the official site or app to distribute malicious content. Beyond mere imitation, the real danger lies in the possibility of spear phishing attacks enabled by the chatbot. These targeted cyber attacks exploit personal information that users unknowingly share on social media and in their daily online activities.

Growing Threat: Spear Phishing Attacks

In the hands of an attacker, ChatGPT becomes a powerful tool for spear phishing attacks. These attacks are carefully designed to exploit information individuals unwittingly reveal through their social media profiles and browsing habits. Cybercriminals use AI to create deceptive content tailored to their intended victims.

To counter this worrying trend, Italian cybersecurity firm Ermes Cyber Security has developed an AI system of its own. Recognizing the increasing reliance on third-party AI-based services, Ermes aims to provide a secure solution that filters and blocks the sharing of sensitive information such as emails, passwords, and financial data.

Threat of Business Email Compromise (BEC)

A particularly worrisome threat is the exploitation of ChatGPT for Business Email Compromise (BEC) attacks. Cybercriminals use templates to craft deceptive emails, tricking recipients into revealing sensitive information. With the help of ChatGPT, hackers can generate unique content for each email, making these attacks difficult to detect and distinguish from legitimate correspondence.
The flexibility of ChatGPT enables attackers to apply countless variations to their messages, increasing the chances of success. This scenario raises serious concerns about the potential misuse of advanced AI technologies in the field of cybersecurity, urging users and organizations to remain alert to emerging threats.
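The defensive approach the article attributes to Ermes, filtering sensitive data out of prompts before they reach a third-party AI service, can be sketched in a few lines. The patterns below are illustrative assumptions only, not Ermes's actual product logic; a real data-loss-prevention filter would be far more sophisticated.

```python
import re

# Illustrative patterns for common sensitive-data categories.
# These names and regexes are assumptions for the sketch, not a real DLP ruleset.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password_hint": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def find_sensitive(text):
    """Return the labels of any sensitive-data patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def filter_prompt(text):
    """Decide whether a prompt bound for a third-party AI service may be sent."""
    hits = find_sensitive(text)
    if hits:
        return False, "blocked (matched: " + ", ".join(hits) + ")"
    return True, "allowed"

ok, verdict = filter_prompt("Summarise this: my password: hunter2")
print(ok, verdict)  # False blocked (matched: password_hint)
```

In practice a filter like this would sit in the browser or network layer, inspecting outbound requests to known AI endpoints rather than relying on users to check their own prompts.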