I predict AI is going to give me more steak. Yes, you read that correctly, and no, I have not lost my mind. My favorite way to describe my job to people starts with a plate. Imagine, for a moment, all the tedious, repetitive, boring, and maybe even a bit soul-crushing tasks you do at work as broccoli. I know many of us in our youth would have described broccoli this way. (In reality, it's the best cheese delivery mechanism known to man, but you get the point.) Now imagine all the parts of your job you genuinely look forward to, that excite you, that light your passion for your field and make you feel like you're making a difference, as the steak. All the responsibilities and tasks you do at work are set on that plate. My plate is about 60-70% broccoli and 30-40% steak. I work in cybersecurity. My official title is security engineer, but I would throw something like "compliance reporter/measurer" in there if I could.
I imagine the exact ratio differs widely from person to person and depends hugely on which corner of the cybersecurity field they occupy. While the field itself employs a relatively small number of people at a global scale, roughly 5.5 million according to ISC2, that does not stop it from being massively complex, with a wide range of titles, duties, and skill sets. If you are skeptical of this point, take a gander at the following security and career roadmaps:
The slice of 'cyber' I occupy is difficult to categorize accurately within these two maps, so let's stick with 'cyber security engineer'. If we take Zippia's cyber security demographics at face value and combine them with ISC2's 2023 Cyber Workforce Study, then I represent not even a single percent of the field of 'cyber'. The USA employs ~1.5 million cybersecurity professionals (ISC2), and 14,773 of them hold the title 'cyber security engineer', so 14,773 / 1,500,000 ≈ 0.0098, or just under 1% of the workforce.
I mention this only to point out that I have a limited perspective on the field, and my experience may be very different from that of others. I hope this keeps the critics at bay. That being said, let's get into why I look forward to my AI steak.
When I try to visualize what my job will look like in 5-10 years, I foresee LLMs handling the tedious, repetitive, and boring tasks that I currently do by hand or have partially automated with scripts. I am not alone - ISC2 polled 1,123 members on AI in cybersecurity, and a majority think that AI will not only improve their job efficiency but also make some parts of their job obsolete!
Some may view this negatively, perhaps as a herald of job loss. And while I tend to resist change in my own life and only come around to it gradually, not this time - I think this is a GREAT thing from the get-go! Why? One word: repetition. Let me take you back to that plate of mine that was 60-70% broccoli. The primary driver of all that broccoli is the repetition of tedious tasks. "But you've read Al Sweigart's Automate the Boring Stuff with Python, so why don't you just script it all away?" you might ask. Well, that's because these 'broccoli' tasks are just complex enough that you can't quite script them to 100% accurate completion. And even if you did, maintaining that level of accuracy would not only consume bandwidth needed for other tasks, it could also be made redundant by future changes in requirements. You see, some organizations are required to stay compliant with the Defense Information Systems Agency's (DISA) Security Technical Implementation Guides (STIGs). If you aren't familiar with STIGs, they are essentially lists of controls that defend against specific risks and vulnerabilities, either for specific applications or for general categories of technology. Sounds easy enough, right? The issue is staying compliant as new versions are released, and the wide range of input required to meet each STIG accurately and according to your organization's standard operating procedures (SOPs). In practice, this can mean assessing thousands of different rules across various types of products, from databases to web apps to operating systems - usually by an understaffed cyber team. After all, only ~30% of global cybersecurity professionals say their organization has the right amount of staff to handle its cyber needs.
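To make the broccoli a little more concrete, here is a minimal sketch of the kind of partial automation I mean, assuming a checklist exported from DISA's STIG Viewer in its XML .ckl format (the file name below is a placeholder, and the exact element layout can vary between checklist versions):

```python
# Minimal sketch: summarize compliance status from a DISA STIG Viewer
# .ckl checklist export. Assumes the common CKL XML layout, where each
# VULN element carries a STATUS child (Open, NotAFinding, Not_Reviewed, ...).
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_ckl(path: str) -> Counter:
    """Count checklist findings by their recorded status."""
    tree = ET.parse(path)
    return Counter(
        vuln.findtext("STATUS", default="Unknown")
        for vuln in tree.getroot().iter("VULN")
    )

if __name__ == "__main__":
    # "rhel8_v1r12.ckl" is an illustrative file name, not a real artifact.
    counts = summarize_ckl("rhel8_v1r12.ckl")
    total = sum(counts.values())
    for status, n in counts.most_common():
        print(f"{status:>15}: {n:>4} ({n / total:.0%})")
```

Even a script this simple only tells you where the checks stand; actually assessing and closing each finding against your SOPs still takes human judgment, which is exactly the gap I hope LLMs start to narrow.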
I am not alone in my aspiration for LLMs to lighten my "plate". The industry is very interested in leveraging LLMs to multiply the abilities of an understaffed and under-skilled workforce. The Cybersecurity and Infrastructure Security Agency (CISA) is currently exploring 12 different LLM use cases to improve its capabilities, one of which is the cyber vulnerability reporting I touched on above. Just last year, John Miller (head of Mandiant Intelligence Analysis) and Ron Graf (Google Cloud data scientist) presented an interesting talk at Black Hat 2023 on how LLMs can aid in cyber threat intelligence.
Given the cyber industry's workforce woes, LLMs are the silver bullet (or silver plate, if we are sticking with my broccoli-and-steak analogy) for this problem, right? No, LLMs aren't going to be a perfect silver bullet, but I believe they will alleviate some of the issues within the state of cybersecurity work - given time. I do think there are some issues to consider:
1. The ever-growing list of "cyber" skills will need to include some level of LLM understanding, contributing to the ever-ballooning 'cyber skills' shortage. Cyber teams will need to understand how LLMs work if they are going to implement them as part of their solutions and workflows.
For me, I honestly still don't have a firm grasp on the inner workings of LLMs. It's not quite magic, but the best "explain like I'm 5" I can give is that there isn't any there there. The LLM is just imitating and parroting a response based on the given input. It isn't actually assessing the contents of your question, but rather making guesses about what kind of response 'fits' or might 'make sense' (the first sketch after this list illustrates the idea). This leads to issue #2.
2. Creating systems and processes that can handle the errors and hallucinations of LLMs, ideally in a way that also improves the error rate over time (see the second sketch after this list).
3. Implementation and proof of concept of actual, useful, LLM-driven systems.
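On issue #1: to illustrate the "guessing what fits" intuition, here is a toy bigram model. This is a deliberately crude, miniature stand-in for what real LLMs do at vastly greater scale and sophistication, and the corpus is made up for the example:

```python
# Toy bigram "language model": pick the next word by sampling from the
# words that followed the current word in the training text. Not how
# real LLMs work internally, but it captures the "guess what fits" idea.
import random
from collections import defaultdict

corpus = (
    "the scan found an open finding the scan found no finding "
    "the team closed the finding"
).split()

# Record which words follow which in the training text.
following: dict[str, list[str]] = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Extend a prompt word by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output often looks fluent and is sometimes nonsense, for the same underlying reason: nothing in the model "knows" anything, it only continues patterns.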
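On issue #2: one common pattern is to never trust raw model output - force the LLM to answer in a structured format, validate that answer, and retry (feeding the error back) when validation fails. The sketch below assumes a hypothetical `ask_llm(prompt)` function standing in for whatever API client you actually use; the JSON schema and status values are illustrative:

```python
# Sketch of a validate-and-retry guardrail around an LLM call.
# ask_llm() is a hypothetical stand-in, not a real library function.
import json

VALID_STATUSES = {"Open", "NotAFinding", "Not_Applicable"}

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def assess_rule(rule_text: str, max_retries: int = 3) -> dict:
    prompt = (
        "Assess this STIG rule and reply ONLY with JSON like "
        '{"status": "Open", "rationale": "..."}.\n\n' + rule_text
    )
    for _ in range(max_retries):
        raw = ask_llm(prompt)
        try:
            result = json.loads(raw)
            if result.get("status") in VALID_STATUSES:
                return result  # passed our checks; still worth human review
            error = f"invalid status: {result.get('status')!r}"
        except json.JSONDecodeError as exc:
            error = f"not valid JSON: {exc}"
        # Feed the validation failure back so the retry can self-correct.
        prompt += f"\n\nYour last reply was rejected ({error}). Try again."
    raise RuntimeError(f"no valid answer after {max_retries} attempts")
```

Logging the rejected attempts also gives you exactly the data you need to measure, and hopefully improve, the error rate over time.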
From the CISA page discussed earlier to the dreams of an unknown, first-time blogger - these are great IDEAS, but they will remain wisps of hope until someone ties all the pieces together.
There are many more concerns I did not list above - ensuring private access to the LLM, creating accurate training data, preventing poisoning by some logic plague, etc. - but I think the above three issues are the most pertinent to me. The industry is still grappling with these ideas and coming up with solutions, and I look forward to exploring them further at some point in the future. I know one thing for sure, though: I, for one, am looking forward to my AI steak.