I wrote the AI plan for my $2B Tech Company: don’t geek out
AI is like any other extremely powerful technology — it can be used for good or for bad, depending on who’s managing it and who has access to it unsupervised and/or in the shadows. You don’t need to know the intricate details of the technology at this level. If you focus too much on the details you will almost certainly miss the forest for the trees. Actually, the more you know about the details of the technology, the more likely you are to miss the forest, because typically highly technical people do NOT respect those who focus on non-technical issues.
Take Nuclear Technology, for example (Fission / Fusion). The most important considerations include: where does the Uranium come from? What are the various categories of technologies, and what skills and resources are needed to put them together? Is it feasible to track any of the materials or technologies? How far away are we from controlled fusion? What are the launch rules for nuclear weapons under various scenarios?
And there are many other examples: Fire, Drugs, Guns, Social Media & Propaganda, even Education (think China’s indoctrination vs. the US approach).
My task in creating a plan and policy
This is focused on corporate jobs (not retail, nursing, and the like).
I’m in charge of Cybersecurity and Risk Management for a $2B Tech Company and I was assigned the task of creating the AI plan and policy.
I’ve always naturally looked at various factors for any project: technical, human, short term vs. long term, impact on both people and the bottom line, competitive, etc. Probably in large part because I’m trans and I’m also a musician and entertainer. Even though I love having discussions about the societal impact of AI, I didn’t even think about it when I was creating this plan. Probably because it’s so baked into my thinking, but also because it’s balanced against the fact that I’m getting paid to represent the company’s best interests. I actually have a legal fiduciary duty to do so.
So given this desire to achieve a balance, I started with the low hanging fruit, which is to do what I always do: analyze the problem with ALL the issues and facts, including the soft facts, such as the impact on workflow and people.
Firstly, I hate to say this so bluntly, but if our competitors use AI and we don’t, we will likely be at a fatal disadvantage. So I couldn’t start with societal implications in my calculus. But I could consider what is a big issue and what really isn’t.
But before we jump in, consider this quote I heard from Robert Greene:
“More harm is caused in this world by stupid, incompetent people than by evil people.”
So let’s start there, because unnecessary disruption and displacement is the big, juicy, low hanging fruit. Furthermore, failures on big projects are typically caused by many people’s real objective (getting more power) and their ability to delude themselves into believing that this is not the driving factor of their decision making and behavior. Throw in poorly designed metrics and you have a witches’ brew of BS. (For example, my company was measuring the success of Agile by how quickly teams switched to “Agile,” rather than by metrics like time to market, reduced rework, and fewer failures to deliver what customers want.) Once we take care of this, and do it above board, we can think about the societal implications and whether we need to, or can, be a leader in that higher realm.
As a Risk Management and Cybersecurity leader who is also a HW and SW engineer, I spend more of my time on these issues than anything else. I rarely get into the details of the tech; the lowest level I usually reach is API design. But I will drill deep into the tech details if I need to, in order to help somebody or to fire a warning shot across somebody’s bow if they’re being uncooperative (which is usually a tactic to avoid accountability). I have more in common with a lawyer, a cop, and an FBI agent than I do with an engineer.

It’s a real pain to get people to stop CONSTANTLY trying to switch focus to bogus metrics and deflect conversations into irrelevant details to avoid accountability. You have to be very tough and have the support of senior management, because few people understand complex technologies like security and AI, and thus it’s really easy for people to weasel their way into a scenario that mostly just benefits them personally. And consider that in MOST cases they’re not even aware of what they are doing, and they will consistently resist any processes that facilitate accountability. They are geeking out because it serves them to do so…unless there is strong management…wink.
The Plan
I can’t give the details here, but I can give an overview. Think of this as an example, or as a source of things to consider. I broke it up into categories, with the issues in each category.
ENGINEERING DEVELOPMENT
- Coding — code snippets to provide ideas or patterns, common functions, starting points for specialized HW, etc. Be careful: if you use the code directly, it could have bugs or fail to provide the functionality you really need within your architecture and roadmap. Not to mention potential copyright and licensing violations.
- Coding — ensure coding guidelines/policies are being followed, provide assistance during coding
- Coding — refactoring suggestions. Again, you can’t use the output directly without a developer’s involvement.
- HW designs have similar concepts (e.g. ASIC designs, common patterns, etc.)
- Better conceptualization and project management on top of tracking tools such as Jira.
- High level analysis of architecture (e.g. where critical assets like keys are managed throughout their entire lifecycle)
- Assistance with code reviews — not replacing them
- Performance profiling and recommendations (e.g. analyzing code and putting in performance measuring hooks; a minimal sketch follows this list)
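To make the “performance measuring hooks” bullet concrete, here is a minimal sketch of the kind of instrumentation an AI assistant might suggest. This is my illustration, not our actual plan or tooling; the names are made up, and a developer still decides where such a hook belongs.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("perf")

def timed(func):
    """Log how long each call to `func` takes (a simple performance hook)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.2f ms", func.__qualname__, elapsed_ms)
    return wrapper

@timed
def build_report(rows):
    # Stand-in for real work worth profiling.
    return sorted(rows)
```

The value is not the decorator itself (any engineer can write one); it’s that an assistant can propose where hooks pay off across a large codebase, which a human then reviews.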
Who does this displace / enhance?
- Displaces junior engineers. Or does it? Maybe they get up and running quicker? This is something to watch out for — certainly from the perspective of the profile of hires you need.
- Reduces the number of mid level engineers you need?
- Enhances everybody’s productivity and grows revenue with the same headcount? Is that displacing engineers? Maybe it will displace some architects, because more geeks will be able to think like architects.
TESTING AND SCANNING
- Enhancing test automation. Some types of automation just aren’t feasible without AI. (See the sketch after this list.)
- Enhancing existing scanning tools (buy / build) — finding certain vulnerabilities might not be effective or feasible without AI
- Evaluating workflows across all scenarios — quality of user experience, security, etc.
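As one illustration of test automation that leans on AI, here is a minimal sketch of having a model propose edge-case tests. Everything here is an assumption for illustration: `llm_complete` is a hypothetical stand-in for whatever model endpoint your company has approved, and the prompt and JSON format are mine, not a real tool’s.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to your approved LLM endpoint."""
    raise NotImplementedError("wire this to your approved model API")

def generate_edge_case_tests(function_source: str) -> list[dict]:
    """Ask the model for edge-case test inputs, keeping a human in the loop.

    The model proposes cases; a developer reviews them before anything
    is checked in or run in CI. Model output is untrusted, so we fail
    closed on malformed responses.
    """
    prompt = (
        "Given this Python function, propose edge-case inputs and expected "
        "outputs as a JSON list of objects with 'input' and 'expected' keys:\n"
        + function_source
    )
    raw = llm_complete(prompt)
    try:
        cases = json.loads(raw)
    except json.JSONDecodeError:
        return []
    return [c for c in cases if "input" in c and "expected" in c]
```

The shape of the workflow is the point (propose, validate, review), not “let the model write the test suite.”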
Who does this displace / enhance?
- Definitely more junior engineers, but the same issues as above: it could just change the nature of their jobs, uplift everybody’s work, and grow revenue.
BAKED INTO THE PRODUCT FOR CUSTOMER USE
This is very specific to the product, but it broadly includes items such as:
- Wizards to help you manage configuration
- Alerting you to problems, anomalies (e.g. suspect behavior, signs of a bug, security concerns)
- Preventative maintenance (e.g. signs that you need more capacity, something is deteriorating, performance degradation and patterns, etc.; a minimal sketch of this kind of alerting follows this list)
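To ground the anomaly-alerting and preventative-maintenance bullets, here is a minimal sketch of the simplest version of the idea: flag a metric that drifts well outside its recent baseline. The window size, threshold, and metric choices are illustrative assumptions; a real product feature would be more sophisticated, and likely model-based.

```python
from collections import deque

class DriftAlert:
    """Flag samples that drift outside a sliding-window baseline.

    A toy stand-in for 'alert on anomalies and degradation patterns':
    track a metric (latency, queue depth, error counts) and alert when
    a new sample is more than `threshold` standard deviations from the
    recent mean.
    """

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous against the window."""
        anomalous = False
        if len(self.samples) >= 30:  # need a minimal baseline first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.threshold * std:
                anomalous = True
        self.samples.append(value)
        return anomalous
```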
Who does this displace / enhance?
- Reduces the number of development engineers needed on such features. Maybe, but it could increase it too. It might not be feasible to implement many of these features without AI, so the engineers might focus on how to use AI tools to achieve the goals, rather than on the infeasible task of writing it all themselves from scratch.
SUPPORT
- Self-service features of the knowledge base will be MUCH more effective
- Level 1 Support will be more effective using these self-service features
- Level 2+ Support can see the above AI queries and answers, summarize the situation, and collaborate with the support person / engineering on drilling down into potential resolutions. When done, they feed the resolution back into the knowledge base. (A minimal sketch of the retrieval step follows this list.)
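Here is a minimal sketch of the retrieval step behind that self-service flow. `embed` is a hypothetical stand-in for whatever embedding model you’ve approved; real deployments add chunking, reranking, and access controls on top of this.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a call to your approved embedding model."""
    raise NotImplementedError("wire this to your embedding endpoint")

class KnowledgeBase:
    """Toy semantic search over KB articles via cosine similarity."""

    def __init__(self, articles: list[str]):
        self.articles = articles
        vecs = np.stack([embed(a) for a in articles])
        # Normalize once so a dot product equals cosine similarity.
        self.vectors = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def top_matches(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q
        best = np.argsort(scores)[::-1][:k]
        return [self.articles[i] for i in best]
```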
Who does this displace / enhance?
- Very low level support people who get most of their information from digging around in the current knowledge base
- Available resources will be redirected towards higher level support people.
MACHINE LEARNING
- For each of the areas above: what is the opportunity for the AI to learn, and does it actively involve the user? (A minimal sketch of capturing that feedback follows.)
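The simplest concrete version of “does it involve the user actively” is capturing explicit feedback on AI output for later evaluation or retraining. A minimal sketch, where the file name and record format are my own assumptions:

```python
import json
import time
from pathlib import Path

FEEDBACK_LOG = Path("ai_feedback.jsonl")  # hypothetical location

def record_feedback(query: str, answer: str, helpful: bool) -> None:
    """Append a user's thumbs-up/down on an AI answer to a JSONL log.

    Downstream, these records can drive evaluation or fine-tuning.
    The point is that learning happens only when the user actively
    opts in to rating the output.
    """
    record = {
        "ts": time.time(),
        "query": query,
        "answer": answer,
        "helpful": helpful,
    }
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
```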
What about the company vs. people / society
Now that we have considered interests from all sides, we can consider this. Again, this depends on who owns / manages the company (e.g. the stock market / pension funds vs. control by an individual), so let’s consider three scenarios:
- Bumbling geek idiots who can’t even get the low hanging fruit, some of whom may also be successfully focused on their own personal gains at the expense of masses of others. Let’s assume we won’t work for such a company; there are ways to tell if people are in the wrong headspace (I’ll blog on that later).
- Competent managers who go for the low hanging fruit, but don’t really care about society one iota. (They might say they do for PR reasons, but they really don’t and it’s reflected in their behavior)
- Same as #2, but they do actually care (at varying levels).
If we look above at who this technology appears to displace (at this point in time; it’s hard to say what comes next, so we’ll have to keep watching), what conclusions can we draw?
- This is complicated: go for the low hanging fruit first, competently; don’t geek out and screw it up for everybody but maybe yourself. Hold other people accountable. Think about your hiring mix, training, etc.
- This will result in a better product and experience for the customer
- This will enhance many people’s jobs, and will require that almost everybody be familiar with AI tools at some level, as a core skill. Just like learning basic computer skills back in the ’80s. If you didn’t want to, that’s on you. Companies should provide training, and do it well. Do not geek out and simply check it off your list! (I’ve seen too much training that dives into the technology details inappropriately.)
- People who are sloppy, lazy, and irresponsible will have a harder time hiding and getting/keeping their jobs.
- It will definitely disadvantage less experienced people, whose jobs will be affected first, but keeping up with the times is very doable, even for somebody who is uneducated, if they have the right guidance on where to focus. (That’s the hard part.)
- It’s hard to say whether, net-net, this is good or bad for the job market. In general, with globalization, the job market is going to be harder on people who don’t keep up and/or are significantly disadvantaged or lazy. And keep in mind who disadvantaged people tend to vote for. And whose fault is that?
- Always keep an eye on the correct metrics and trends, and hopefully you have management type #2 or #3, preferably #3 if that’s feasible.
What could a company do at a societal level? What companies SHOULD do (but rarely do) is speak up and basically tell the truth. If something is a competitive necessity, or your shareholders are demanding it, and it makes you uncomfortable or look bad because it’s not good for society, you should lobby for a law or a policy and explain why. But few do that, because they have a fiduciary responsibility to increase the company’s profits, not to save society. If they’re clever, they can combine the two, but that’s a very rare find. Plus there’s a liability to taking action, because somebody is going to use your honesty and actions against you. There needs to be a better way to address this, but that’s a topic for another day.
That’s about all we can do right now as we get started: take the low hanging fruit and do it well. There are a huge number of considerations, and we need to get on with it and do the best we can. The genies we spring will NEVER go back into the bottle. (Maybe they ought to, but they won’t; just like entropy, you can’t change it.) Whether it will create problems with mid to higher level jobs to the point where the unemployment rate shoots up, time will tell. It’s also not clear whether the current struggles in the job market are significantly affected by AI (other than the crap reliance on job listings).
Maybe we’ll eventually end up like Star Trek, where people can live a life of pursuing their science, exploration, art, and dreams instead of worrying about money. We can only hope. But if we even want a shot at something like that, we’ll have to go down this path with focus, transparency, and empathy. These are the most important skills required for a cybersecurity and risk manager, and for most other jobs as well.
======================
And check out my book, which is about being trans, but it’s MUCH deeper than that.