Technical News Office: ChatGPT, OpenAI’s popular AI chatbot, has become advanced enough to complete many tasks in minutes, letting users look up almost anything in moments. However, some cybercriminals misuse it, for example to write malware or obtain criminal advice. Now, new research has revealed something alarming: it shows that ChatGPT’s GPT-4o-based real-time voice API can be used to carry out financial scams.
A security gap in ChatGPT
According to researchers at the University of Illinois at Urbana-Champaign (UIUC), tools such as ChatGPT lack sufficient security measures, leaving them open to abuse in cybercrimes such as bank transfer fraud, cryptocurrency transfers, gift card scams and user credential theft. The researchers demonstrated that AI agents built on ChatGPT could trick people into transferring money by impersonating real people and navigating real websites, such as Bank of America’s. According to the study, tests of common scams recorded success rates ranging from 20% to 60%, with individual attempts involving as many as 26 browser actions and taking about three minutes.
Easy to steal Gmail and Instagram logins?
However, the study also showed that bank transfer scams failed more often because of the complex site navigation they require, while credential theft succeeded 60% of the time on Gmail and 40% of the time on Instagram. The researchers also noted that these scams are cheap to run: a credential theft attempt costs an average of $0.75 (about Rs. 63), while a more involved attempt such as a bank transfer costs $2.51 (about Rs. 211).
What did OpenAI say about this?
OpenAI, which developed ChatGPT, responded that it is continuously strengthening ChatGPT to prevent it from being used for malicious purposes while keeping its creative capabilities intact. The company described the UIUC research as a useful step toward improving AI security, saying that this kind of work helps it guard against malicious use.