@adamshostack@infosec.exchange
Toots are my opinion and not those of my company or any of the institutions I'm affiliated with.
Boosting is not an endorsement.
Author, game designer, technologist, teacher.
Helped to create the CVE and many other things. Fixed autorun for XP. On Blackhat Review board.
Books include Threats: What Every Engineer Should Learn from Star Wars (2023), Threat Modeling: Designing for Security, and The New School of Information Security.
Following back if you have content.
Heck, maybe it's time for an #introduction.
I'm Adam Shostack, a leading expert in threat modeling and secure design. I wrote Threat Modeling: Designing for Security, Threats: What Every Engineer Should Learn from Star Wars and co-authored The New School of Information Security.
I've been doing appsec for over 25 years from startups to Microsoft. These days I spend most of my time helping organizations develop effective threat modeling programs through coaching and training.
Early in my career, I helped create vuln scanning as a product category (sorry) and the CVE (not sorry). My second startup, Zero-Knowledge, created awesome privacy systems. While at Microsoft, I created the SDL Threat Modeling Tool and the Elevation of Privilege card game. I also pushed the autorun fix to Windows XP and Vista, preventing tens of millions of infections.
I'm on the Review Board for Blackhat, the Steering Committee for the Privacy Enhancing Technologies Symposium/PoPETS and the advisory boards for IriusRisk and KeyCaliber. I'm a proud OWASP and ACM member.
I'm also an Affiliate Professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and was a Co-Director of the Cyber Lessons Learned project at the Belfer Center.
In my spare time, I like to ███ and ███████████, and also protect my privacy.
Does anyone have a tool that'll convert https://www.usenix.org/conference/usenixsecurity25/technical-sessions#switcher into a useful one page at a glance bit of paper?
Just had to send @violetblue@mastodon.social's "what to do if you catch covid" to one of my last novid friends, who got sick at hacker summer camp. https://www.patreon.com/posts/86871700?collection=1162 Stay safe, folks.
Timo Jagush presenting on offboarding at #soups2025, points out that frameworks are hard to navigate… framework creators have every motive to be "comprehensive" but little motive to be usable.
https://www.usenix.org/conference/soups2025/presentation/detsika
Yizhu Joy presenting at #soups2025 on LLM agent explainers of spam. Uses FTC data…
It's surprising to see no mention of ecological validity, LLM flaws, or researcher-pleasing effects. #soups2025
Lyft is set to allow location services "while using". Wtf happened here? Is "running in the background" using? Do I need to kill apps to make that work? (I used it this morning to get to the airport.)
It shows as the 2nd app in swipe-up view, but I didn't open it.
One of the hats I wear is editor for the @defcon@defcon.social Franklin Hackers' Almanack. If you see talks that policymakers should know about, please let me know here, tag me, etc.
I'm already seeing great stuff on voting security, resisting back doors, irresponsible behavior by thin-skinned vendors… what else should I see?
https://defconfranklin.com/
The frenzied activity here at @defcon@defcon.social is just a sight to behold!
The "groundbreaking" NIST report is on... a hackathon where the devs are available?
The dream of the 90s is alive in the media.
"If the report was published, others could have learned more information about how the [NIST] risk framework can and cannot be applied to a red teaming context," says Alice Qian Zhang, a PhD student at Carnegie Mellon University who took part in the exercise. Qian Zhang says the exercise was particularly rewarding because it was possible to engage with makers of tools while testing them.
https://www.wired.com/story/inside-the-biden-administrations-unpublished-report-on-ai-safety/