I'm ruminating on spending the effort to write this up from a nuclear engineering perspective and posting it to LinkedIn, just for the grins.
I know, posting anything to LinkedIn is about as advisable as repeatedly striking oneself in the forehead with the pointy end of a geologist's rock pick. Completely inadvisable.
Our internal developer discussion today centered on these issues, basically treating generative AI as a grievous security risk that nobody in our industry is prepared to handle. You thought Stuxnet and SCADA attacks were a major worry - bah! Those sorts of attacks are nation-state movie-plot risks by comparison. Yes, they are real - follow @vncresolver@fedi.computernewb.com for some appalling fun - but they are rare, especially in nuclear. At least rare compared to the number of engineers whose managed work machines have Microsoft Office, Acrobat Reader, and similar AI-tainted tools installed by their authoritarian betters in corporate IT.
Basically our internal engineering and developer communities are working to raise awareness of the security risk of AI, how AI systems operate, the details of the vulnerabilities, the consequences, and what little defenses we have beyond personal awareness. With IT often locking things down to the point that we cannot protect ourselves or worse, actively enabling and propagating risk, we need to support each other in defending against AI.
I'm not sure that feeding content into LinkedIn could do anything positive, but there may be a few engineers out there who still take their security responsibilities under 10 CFR seriously.
Then again, the rule of law is dead in the US and trying to prop up its rotting corpse will just get me pegged (further) as an enemy of the state. No matter what I do, I'm going to end up in a camp, probably sooner than later.
And it really shouldn't be Us vs Them regarding IT. None of us want to cause a security incident, none of us want to make more work for IT, we all take security seriously, and theoretically we're all on the same page.
Except the authoritarian small-dick IT Guy attitude is what IT leads with. No conversation, no discussion, no understanding, no listening - always a command and a threat. Mall Cop training 101: Establish Control of The Situation By Acting Like A Big Dog.
Way to establish an atmosphere of trust and build effective security throughout the organization there Chief. "Play stupid games, win stupid prizes" works both ways but hey, you're the one who decided to communicate primarily through pendulous scrotum waving. Not much I can work with there.
I worked Ops for 13 years and dealt with my share of security incidents. I learned incident response, chain of custody, and what you can and can't tell from logs decades ago at the LISA conference, from People Who Had Seen Some Shit. People who traced intruders back through the modem pool only to lose them in archaic (analog?) switches that hadn't yet been upgraded. I was responsible for locking down and monitoring my systems. The fundamentals haven't changed, and one of those fundamentals is to not be a dick to the people you're trying to protect. If you treat them like the enemy, they'll act like the enemy, and it will be your fault for making it that way. But that requires some amount of social ability and willingness to use it. Much easier to go all Cartman with RESPECT MY AUTHORITAAAAY!
Our IT department has been happily rolling out AI-enabled internal tools that nobody* asked for, while simultaneously warning us about our internal policy against external LLM and generative AI use, and (also simultaneously) being unable to firewall off or block traffic to Adobe's or Microsoft's AI farms, because those vendors run a constantly morphing pool of AI servers that makes them effectively unblockable.
Just like botnet operators.
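That "constantly morphing pool" is the same trick that makes botnets hard to firewall. A toy simulation makes the point - all addresses below are from the RFC 5737 documentation ranges and the 30% daily churn rate is an invented, illustrative number, not anything measured from a real vendor:

```python
import random

# Toy model: why a static IP blocklist loses against a rotating server pool.
# Addresses are from the RFC 5737 documentation ranges; the 30% daily churn
# rate is invented for illustration.

def rotate_pool(pool, churn, seed):
    """Replace roughly `churn` fraction of the pool with fresh addresses."""
    rng = random.Random(seed)
    keep = rng.sample(pool, round(len(pool) * (1 - churn)))
    fresh = [f"203.0.113.{rng.randrange(256)}"
             for _ in range(len(pool) - len(keep))]
    return keep + fresh

# Day 0: the firewall team snapshots the live pool and blocks every address.
pool = [f"198.51.100.{i}" for i in range(50)]
blocklist = set(pool)

# Each day the vendor rotates ~30% of its endpoints; the blocklist is static.
for day in range(1, 8):
    pool = rotate_pool(pool, churn=0.3, seed=day)
    still_blocked = sum(ip in blocklist for ip in pool)
    print(f"day {day}: {still_blocked}/{len(pool)} live endpoints still blocked")
```

Coverage decays geometrically, which is why chasing individual IPs is a losing game and the endpoints are, for practical purposes, unblockable at the firewall.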
Oh right. IT pays Adobe a license fee so Adobe can harvest my work and put me at risk of federal jail time. They also pay Microsoft a license fee to do the same.
This is the same IT department that does its best to lock down my workstation to prevent me from using unauthorized unsafe free applications instead of the authorized unsafe applications they paid money for.
I don't for a second feel our IT department is especially incompetent or evil in this regard. They don't seem too different from every other corporate IT department I've dealt with in the past 20 years or so.
Clearly if I suggest that corporate IT is the main problem, I'm the one with the attitude problem, I'm the security risk.
* i.e. our investors
Often with a tone of "we'll fire and prosecute you to the fullest extent of the law," because since all the malls started closing, all the mall cops moved into IT. That's my theory anyway.
Remember when the common wisdom was to not pirate commercial software because downloading random executables was likely to infect you with malware that would steal your data or worse?
Now we pay for legitimate, signed copies of software from vendors who load their applications with AI "features" that exfiltrate your data to remote servers, with no way to disable them.
I feel old. When I started out, we got our malware for free and were doing something wrong when we did.
Maybe this is payback for all those C64 games I pirated when I was 16.
In either case, I'm still on the wrong side of the law. Back then, my pimply ass was taking the bread out of game developers' mouths.
Now Acrobat, Copilot, etc. are silently exfiltrating my employer's Export Controlled Information to unauthorized third parties when I use employer-supplied tools, putting me in violation of export control laws and exposing me to federal jail time. Intent doesn't matter, knowledge doesn't matter. If ECI I'm handling gets out, it's my ass in the fire.