If you’ve been following artificial intelligence (AI) lately – and you should be – you may have already started thinking about how it will change the world. In terms of its potential impact on society, it’s been compared to the advent of the Internet, the invention of the printing press, even the first use of the wheel. Maybe you’ve played with it; maybe you’re savvy enough to wonder what it could mean for your work. But there’s one thing you shouldn’t overlook: like any technology, it can be used for both good and evil.
If you think cyberattacks and cybercrime are bad when done by humans or simple bots, wait until you see what AI can do. And, as Ryan Heath wrote in Axios, “AI can also weaponize modern medicine against the same people it sets out to cure.”
We may need DarkBERT and the Dark Web to help protect us.
One new study showed how AI can create cheaper, much more effective online phishing campaigns, and the author notes that campaigns can also use “persuasive voice transcripts of individuals.” “By engaging in natural language dialogue with targets, AI agents can lull victims into a false sense of trust and familiarity before launching attacks,” he notes.
It’s worse than that. A recent article in The Washington Post warns:
That’s just the beginning, experts, executives and government officials fear, as attackers use artificial intelligence to write software that can break into corporate networks in novel ways, change its appearance and functionality to defeat detection, and smuggle data back out through processes that appear normal.
The outdated architecture of the internet’s major protocols, the relentless layering of flawed programs, and decades of economic and regulatory failures have created armies of criminals with nothing to fear from businesses that don’t even know how many machines they have, let alone which ones are running outdated programs.
Health care should also be concerned. The World Health Organization (WHO) just called for caution when using AI in healthcare, noting that, among other things, AI can “generate responses that appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors…generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.”
It will get worse before it gets better; the WaPo article warns: “AI will give attackers the advantage for the foreseeable future.” This may be where solutions like DarkBERT come in.
Now, I don’t know much about the Dark Web. I know vaguely that it exists and that people often (but not exclusively) use it for bad things. I’ve never used Tor, the software commonly used to access the Dark Web anonymously. But some clever researchers in Korea decided to create a Large Language Model (LLM) trained on data from the Dark Web – fighting fire with fire, so to speak. They call it DarkBERT.
The researchers went this route because “recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web.” LLMs trained on Surface Web data will miss, or fail to understand, much of what happens on the Dark Web – which is exactly what some Dark Web users are counting on.
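That vocabulary gap is easy to illustrate. The toy sketch below (not DarkBERT itself – all example texts are invented) builds a vocabulary from "surface web" text and measures how much of a jargon-heavy post falls outside it; tokens a model has never seen carry little usable signal.

```python
# Toy illustration of the vocabulary-gap problem: a vocabulary built
# from "surface web" text covers almost none of the jargon in a
# (hypothetical, invented) dark-web-style post.

from collections import Counter


def build_vocab(corpus):
    """Collect every lowercase whitespace token seen in the corpus."""
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    return set(counts)


def oov_rate(doc, vocab):
    """Fraction of a document's tokens that are out-of-vocabulary."""
    words = doc.lower().split()
    unknown = [w for w in words if w not in vocab]
    return len(unknown) / len(words)


surface_corpus = [
    "new research shows language models can help security teams",
    "the report warns about phishing campaigns and malware",
]
# Invented jargon-heavy text standing in for a dark-web post.
dark_post = "fullz dumps cvv escrow onion mirror fresh combo list"

vocab = build_vocab(surface_corpus)
print(oov_rate(dark_post, vocab))  # every token is unseen -> 1.0
```

A real Surface-Web-trained model wouldn't map all of these to unknown tokens, but the point stands: its representations of domain slang are weak, which is the motivation for pretraining on Dark Web text in the first place.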
I won’t try to explain how they gathered the data or trained DarkBERT; what matters is their conclusion: “Our evaluation shows that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.”
They demonstrated the effectiveness of DarkBERT against three potential problems of the Dark Web:
- Ransomware leak site detection: identifies “the sale or publication of private, confidential data of organizations leaked by ransomware groups.”
- Noteworthy thread detection: automatically identifies potentially malicious threads.
- Threat keyword inference: generates “a set of keywords that are semantically related to threats and drug trafficking on the Dark Web.”
On each task, DarkBERT was more effective than the comparison models.
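Tasks like leak site detection are, at bottom, text classification. The sketch below frames that idea in miniature: DarkBERT fine-tunes a pretrained transformer for the job, but here a tiny Naive Bayes classifier stands in so the shape of the task is runnable anywhere, and all training texts and labels are invented for illustration.

```python
# Toy sketch of leak-site detection as binary text classification.
# A minimal Naive Bayes with Laplace smoothing; DarkBERT uses a
# fine-tuned transformer for the real task.

import math
from collections import Counter, defaultdict


def train(docs):
    """docs: list of (text, label). Returns per-label word counts and label counts."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels


def predict(text, counts, labels):
    """Pick the label with the highest log-probability for the text."""
    vocab = {w for c in counts.values() for w in c}
    total_docs = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label, n in labels.items():
        lp = math.log(n / total_docs)  # class prior
        total = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best


# Invented example data, labeled "leak" (leak-site-like) vs "other".
training = [
    ("leaked database for sale deadline before full publication", "leak"),
    ("company data dump released victims refused to pay", "leak"),
    ("forum discussion about privacy tools and mirrors", "other"),
    ("marketplace review thread for vendors", "other"),
]
counts, labels = train(training)
print(predict("new victim data for sale pay before deadline", counts, labels))  # -> leak
```

The real systems differ in every detail – pretrained contextual embeddings instead of word counts, far larger labeled corpora – but the task framing (page text in, threat label out) is the same.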
The researchers have yet to release DarkBERT more widely, and the paper has yet to be peer-reviewed. They know they still have work to do: “In the future, we also plan to improve the performance of Dark Web domain specific pretrained language models using newer architectures and crawl additional data to allow the construction of a multilingual language model.”
Still, what they’ve shown is impressive. GeeksforGeeks gushed:
DarkBERT emerges as a beacon of hope in the relentless battle against online malice. By harnessing the power of natural language processing and delving into the mysterious world of the dark web, this formidable AI model delivers unprecedented insights, empowering cybersecurity experts to fight cybercrime with greater efficiency.
It can’t come soon enough. The New York Times reports a wave of entrepreneurs offering solutions that try to identify AI-generated content – text, audio, images, or video – that could be used for deepfakes or other nefarious purposes. But the article notes that it’s like anti-virus protection: as the detection AI gets better, so does the content-generating AI. “The authenticity of content is going to become a big problem for society as a whole,” admitted one such entrepreneur.
When even Sam Altman and other AI leaders are calling for AI oversight, you know this is something we should all be worried about. As WHO warned, “there is concern that caution that would normally be applied to any new technology is not being exercised consistently with LLMs.” Our enthusiasm for AI’s potential is far outpacing our wisdom in using it.
Some experts recently called for an Intergovernmental Panel on Information Technology – including but not limited to AI – to “consolidate and summarize the state of knowledge on the potential societal impacts of digital communications technologies,” but that seems like a necessary yet hardly sufficient step.
Similarly, WHO has proposed its own guidance on the Ethics and Governance of Artificial Intelligence for Health. Whatever watchdogs, regulatory requirements, or other safeguards we intend to put in place, they’re already late.
In any event, AI from the Dark Web will ignore, and actively try to circumvent, whatever laws, regulations, or ethical principles society may agree on. So I’m cheering for solutions like DarkBERT that can fight whatever AI emerges from it.
Kim is a former director of e-marketing at a major Blues plan, editor of the late & lamented Tincture.io, and now a regular THCB contributor.