Sunday, December 8, 2024
HomeCyber Security NewsBlack Hat USA 2023: Complete AI Briefings Roundup

Black Hat USA 2023: Complete AI Briefings Roundup

Published on

SIEM as a Service

The 26th annual BLACK HAT USA is taken place at the Mandalay Bay Convention Centre in Las Vegas from August 5 to August 10, 2023. Four days of intensive cybersecurity training covering all skill levels are scheduled to start off the event. 

More than 100 selected Briefings, dozens of open-source tool demos in Arsenal, a vibrant Business Hall, networking and social activities, and much more will be included in the two-day main conference.

In particular, the Black Hat Briefings bring together experts from all corners of the infosec industry, including the business and government sectors, university institutions, and even underground researchers.

- Advertisement - SIEM as a Service

This year, Black Hat is introducing the “Certified Pentester” program, which includes a full-day practical exam covering pen-testing topics.

Highlights of AI, Machine Learning, and Data Science Briefing Presented

Devising and Detecting Phishing:

This Briefing was presented by Fredrik Heiding, Harvard. Based on a few data points about a user, AI programs that employ vast language models may automatically construct realistic phishing emails. 

They differ from “traditional” phishing emails, which are created by hackers utilizing a few broad guidelines they have learned through experience.

An inductive model that mimics these laws is the V-Triad. In this study, they compare users’ suspicion of emails generated by GPT-4 automatically versus emails generated by the V-triad.

The complete briefing is available here.

Risks of AI Risk Policy

The study was presented by Ram Shankar Siva Kumar, Microsoft; Harvard. Aside from privacy and security issues, the deployment of AI systems also raises the possibility of bias, inequity, and responsible AI failures. 

But what’s surprising is how quickly the ecosystem for AI risk management is growing: twenty-one draft standards and frameworks were published by standards organizations last year alone, and already significant companies and a growing number of startups provide tests to compare AI systems to these standards. 

Once these frameworks are complete, organizations will quickly embrace them, and compliance officers, ML engineers, and security specialists will eventually need to apply them to their AI systems.

The complete briefing is available here.

BTD: Unleashing the Power of Decompilation for x86 Deep Neural Network Executables

The Briefing was presented by Zhibo Liu, Ph.D. Student, Hong Kong University of Science and Technology. Deep learning (DL) models are compiled into executables by DL compilers to fully use low-level hardware primitives due to their widespread use on heterogeneous hardware devices. 

This method enables the low-cost execution of DL calculations on a range of computing platforms, such as CPUs, GPUs, and other hardware accelerators.

They introduce BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables, in this talk.

The complete briefing is available here.

Identifying and Reducing Permission Explosion in AWS

The talk was presented by Pankaj Moolrajani, Lead Security Engineer, Motive. AWS’s cloud infrastructure and services have grown quickly, which has increased the overall amount of permissions and security threats. 

This session suggests an analytical, graph-based method for locating and reducing AWS permission explosion. 

The suggested approach entails gathering information on AWS IAM roles and the permissions linked to them, creating a graph representation of the access links, and searching the graph for groups of roles with a lot of permissions.

The complete briefing is available here.

AI-Assisted Decision Making of Security Review Needs for New Features

The study was presented by Mrityunjay Gautam, Sr. Director, Product Security, Databricks. SDLC has evolved from the decade-old definition by Microsoft to Agile transformation and is finally trying to catch up with cloud development velocity. While the process is well understood in the industry, the execution varies a lot.

How many times has it happened that we discovered a feature with security impact at the time it is getting shipped when a customer raises a concern and it is escalated to the security team, or in the worst case scenario when there is a security incident? 

In this talk,  he presented a novel approach to solving this problem using Deep Learning and NLP technologies.

The complete briefing is available here.

IRonMAN: InterpRetable Incident Inspector Based ON Large-Scale Language Model and Association Mining

This was presented by Sian-Yao Huang, Data Scientist CyCraft Technology. Contextual incident investigation and incident similarity evaluation are essential components of current incident investigation and proactive threat-hunting tactics. 

However, because of their dependability and competitive performance, modern automated systems frequently rely on pattern- and heuristic-based techniques.

These methods cannot link events with contextual information and are vulnerable to evasion via minor modifications, resulting in false warnings. Recent improvements in large-scale language models (LLMs) have yielded promising language representation findings.

The complete briefing is available here.

LLM-Powered Threat Intelligence Program

This briefing was presented by John Miller, Head of Mandiant Intelligence Analysis, Google Cloud. 

As the cyber security sector investigates large language models (LLMs) such as GPT-4, PaLM, LaMDA, and others, organizations are attempting to determine the return on investment these capabilities may provide for security programs. 

The rising accessibility of LLMs for cyber threat intelligence (CTI) activities affects the interplay between the fundamental components underlying any CTI program’s capacity to fulfill its organization’s threat intelligence demands. 

The complete briefing is available here.

Dos and Don’ts of Building Offensive GPTs

This talk was presented by Ariel Herbert-Voss, CEO and Founder, RunSybil. In this session, they show how you can and cannot utilize LLMs like GPT4 to uncover security flaws in apps, and they go over the benefits and drawbacks of doing so.

They speak in detail about how LLMs operate and provide cutting-edge ways for deploying them in offensive situations.

The complete briefing is available here.

The Psychology of Human Exploitation by AI Assistants

This briefing was presented by Matthew Canham, CEO, of Beyond Layer Seven, LLC. The ChatGPT and GPT-4 large language models (LLMs) have taken the globe by storm in the last 60-90 days.

Last year, a Google engineer grew so persuaded that a model was sentient that he broke his nondisclosure agreement. Few people were concerned about the rise of artificial general intelligence (AGI) two years ago. Even conservative academics now advocate for a far shorter period.

The complete briefing is available here.

Uncovering Azure’s Silent Threats

This study was presented by Nitesh Surana, Senior Threat Researcher, Trend Micro. Machine Learning as a Service platform is provided by cloud service providers, allowing businesses to use the advantages of scalability and dependability while undertaking ML operations. 

However, given the widespread deployment of such AI/ML systems throughout the world, the platform’s security posture may often go unreported.

Attendees will learn about the many difficulties encountered in AML, which may extend to other Cloud-based MLaaS systems, during this lecture.

The complete briefing is available here

Perspectives on AI, Hype, and Security

The briefing was presented by Rich Harang, Principal Security Architect, Nvidia. This year saw record amounts of AI hype, and if the press is to be believed, no business is safe, even the security industry. 

Although it is evident that hype-fueled, quick adoption has negative consequences, when article after article asserts that if you don’t employ AI, you’ll be replaced, the attraction might be difficult to resist. 

In addition, there are privacy concerns, planned legislation, legal obstacles, and several other issues. So, what does this all mean for security?

The complete briefing is available here.

Synthetic Trust: Exploiting Biases at Scale

This study was presented by Esty Scheiner, Security Engineer, Invoca. This lecture delves into artificial intelligence-generated voice phishing attacks. 

The talk will seek to reveal the emerging dangers and implications of generative AI on decision-making, personal security, and organizational trust, in addition to digging into the psychological and technological components of such attacks.

They investigate cutting-edge machine learning experiments that produce realistic AI-generated voices. Positive uses for AI voices include enhancing contact center interactions.

The complete briefing is available here.

Cyber Deception Against Autonomous Cyber Attacks

This study was presented by Michael Kouremetis, Principal Adversary Emulation Engineer, MITRE. It was originally assumed to be either impossible or decades away. However, developments in search and neural network classifiers, as well as processing advances, resulted in the invention of the AlphaGo system in 2016, which is capable of surpassing the world’s greatest Go players.

In this presentation, they address a future cyber adversary whose actions and decisions are entirely controlled by an autonomous system (AI).

The complete briefing is available here.

Poisoning Web-Scale Training Datasets is Practical

This briefing was presented by Will Pearce, AI Red Team Lead, Nvidia. Many well-known deep learning models frequently rely on enormous, distributed datasets acquired via the internet. 

Because of licensing and other considerations, these datasets are often maintained as a list of URLs from which training samples may be accessed. Domains, on the other hand, expire and can be acquired by a malicious actor. 

This problem impacts StableDiffusion and Large-Language Models like ChatGPT that are trained on internet-sourced data.

The complete briefing is available here.

The Advent of AI Malware

Kai Greshake, Security Researcher, presented the briefing. They demonstrate that prompt injections are more than just a novelty or annoyance; a whole new breed of malware and manipulation can now run wholly within massive language models like ChatGPT. 

As corporations rush to integrate them with other applications, they will emphasize the importance of properly considering the security of these new technologies. You’ll learn how your future personal assistant might be corrupted and what implications could result.The complete briefing is available here.

Gurubaran
Gurubaran
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Latest articles

DaMAgeCard Attack – New SD Card Attack Lets Hackers Directly Access System Memory

Security researchers have identified a significant vulnerability dubbed "DaMAgeCard Attack" in the new SD...

Deloitte Denies Breach, Claims Only Single System Affected

Ransomware group Brain Cipher claimed to have breached Deloitte UK and threatened to publish...

Top Five Industries Most Frequently Targeted by Phishing Attacks

Researchers analyzed phishing attacks from Q3 2023 to Q3 2024 and identified the top...

Russian BlueAlpha APT Exploits Cloudflare Tunnels to Distribute Custom Malware

BlueAlpha, a Russian state-sponsored group, is actively targeting Ukrainian individuals and organizations by using...

API Security Webinar

72 Hours to Audit-Ready API Security

APIs present a unique challenge in this landscape, as risk assessment and mitigation are often hindered by incomplete API inventories and insufficient documentation.

Join Vivek Gopalan, VP of Products at Indusface, in this insightful webinar as he unveils a practical framework for discovering, assessing, and addressing open API vulnerabilities within just 72 hours.

Discussion points

API Discovery: Techniques to identify and map your public APIs comprehensively.
Vulnerability Scanning: Best practices for API vulnerability analysis and penetration testing.
Clean Reporting: Steps to generate a clean, audit-ready vulnerability report within 72 hours.

More like this

DaMAgeCard Attack – New SD Card Attack Lets Hackers Directly Access System Memory

Security researchers have identified a significant vulnerability dubbed "DaMAgeCard Attack" in the new SD...

Deloitte Denies Breach, Claims Only Single System Affected

Ransomware group Brain Cipher claimed to have breached Deloitte UK and threatened to publish...

Top Five Industries Most Frequently Targeted by Phishing Attacks

Researchers analyzed phishing attacks from Q3 2023 to Q3 2024 and identified the top...