KAWAiiCon 2019 Day 2 - Notes

Here are my notes from the second day of KAWAiiCon. This was a much more mixed day with a number of sparkle talks on various topics.

8 bit control of high voltage for musical purposes

Josh Bailey

Just cool Tesla coil music. It's really not possible to do this justice in notes, have a look at https://www.vandervecken.com.

There have been a lot of "look, I made music with a Tesla coil" demos, but what was a bit different about this one was working with musicians to build the sounds they'd actually need to make music with such an instrument.

The funny quote being:

We asked the school of engineering and they said no, so we asked the design school and they said yes.

Also amusing was Sput's description of the required health and safety forms to have the Tesla coil live at the venue. Apparently in one meeting

They asked "what's the worst that can happen?"

And Josh, totally straight-faced, said "Well, a spark could escape the Faraday cage and kill someone."

And the venue people paused a bit, then just wrote down "Death" on the form and moved on.

Finding a poisonous seed

Negar Shabab, Noushin Shabab

Interesting talk, focused on bad compilers and linkers introducing malicious code into legitimate builds, rather than on supply-chain security for libraries and the like.

Interesting idea, but the exploit relies on the developer using compilers from untrusted sources or linkers with broken signatures; it's concerning that that didn't raise red flags. Still, it's an important warning that the compiler is part of the supply chain.

The old idea came up in the 1983 Turing Award given to Dennis Ritchie & Ken Thompson for Unix, and Thompson's famous acceptance speech "Reflections on Trusting Trust".

Even before that, a USAF paper from 1974 raised the risks, even though they were compiling their own Multics.

There are a lot of actual recent attacks.

INDUC - 2009

  • Targeted Delphi compiler

XcodeGhost - 2015

  • Malicious Xcode downloaded from Baidu

ShadowPad - 2017

  • APT crew that is still active; targets Windows linking
  • Same actor linked to the CCleaner compromise

ShadowHammer - 2019

  • ASUS back doored targeting the gaming industry.
  • Nation State attacker
  • Complex chain
  • minimal change to the linker; it just adds an import, no call
  • import is of a malicious library
  • library init code is the real malware.

First do no harm: getting security news stories on the road to recovery

Izzi @sneakybaker

Talk on engaging the media in order to raise the quality and accuracy of reporting on security issues. Izzi is PR at CERT NZ so has practice in this.

Stop blinding people with science; focus on telling stories, because people react to stories.

Humans are driven by values and ethics, not by facts. The media wraps and presents these as stories.

"We take security seriously" isn't good enough anymore, people need to believe and trust.

Media looks for good stories, but what is a good story?

  • Engaging and compelling?
  • True?

Different needs in different times and contexts.

How is news made?

  • Media Releases
  • Social Media
  • Connections and Relationships
  • Digging and pulling on threads

But what affects the story?

  • Experts, but what is an expert? There are lots of actual experts in infosec, but they're not necessarily putting their hands up to speak. It's difficult for journalists to know who is an expert, so it falls to relationships that build up over time. This is how Bob from the computer store ends up on the news
  • Influence, stories are always sold based on relationships
  • Paid placements?

What we need to do, bring everyone on the security roller-coaster.

There are some very good tech journalists, but security stories are generally treated as human interest stories, not tech stories.

We need to share the knowledge: explain cred dumps at a BBQ, and how to protect yourself, rather than telling people to delete Facebook.

Tell a story

Be a source of truth

  • Write blogs
  • Direct to CERT (if appropriate)
  • Correct mistakes in media articles

Seeing The Invisible: Finding Fingerprints on Encrypted Traffic

0x4D31 (adel) @0x4D31

Fingerprinting encrypted traffic by hashing various identifying fields of the unencrypted handshake.

JA3, a method developed by Salesforce for TLS: concatenate certain fields of the handshake and hash the result.
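
The concat-and-hash idea can be sketched roughly like this (a simplification, not JA3's exact field extraction, and the field values in the example are made up):

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Sketch of a JA3-style fingerprint: ClientHello fields as decimal
    values, joined with '-' within a field and ',' between fields, then MD5."""
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Example (made-up values): same client stack, same hash every time.
fp = ja3_fingerprint(771, [4865, 4866], [0, 11, 10], [29, 23], [0])
```

Because the hash covers the whole advertised feature set, two connections from the same client stack collapse to the same short string, which is what makes it useful as an indexable fingerprint.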

The focus of this research is RDP. RDP's "enhanced security" is just TLS, so JA3 applies directly. "Standard security" is a custom protocol built using RSA, RC4 and 3DES; a similar fingerprinting scheme was developed for it based on JA3.

Interesting patterns: one IP with lots of different fingerprints == randomizing the header to avoid fingerprinting. That makes it stand out like a sore thumb, the opposite of the intent, and repeating patterns in the handshake still allow detection of the actor.

The story of the "Uncrackable" Lockbox, and Why Hackers Need to Work Alongside Developers

Matthew Ruffell

Someone attempted to create a time-locked file and offered BTC as a bounty for breaking the crypto. It was broken repeatedly. The examples and breaks are on the blog https://ruffell.nz.

First attempt was just an if statement: change the binary and it decrypts.

Second attempt used the time as an input to the crypto: just supply the expected value.
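
A minimal sketch of why that fails, assuming a hypothetical scheme where the key is derived from the unlock date: the unlock date is public, so nothing stops an attacker computing the key immediately instead of waiting.

```python
import hashlib

UNLOCK_DATE = "2025-01-01"  # hypothetical unlock date, visible in the binary

def derive_key(date_string: str) -> bytes:
    # The key depends only on the (public) unlock date -- there is no
    # secret input, so the derivation is repeatable by anyone, any time.
    return hashlib.sha256(date_string.encode()).digest()

# The lockbox intends: "call derive_key(today) once today == UNLOCK_DATE".
# The attacker just does this right now:
key = derive_key(UNLOCK_DATE)
```

Any scheme where the "time" input is a value the attacker can predict or supply reduces to this.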

Third attempt used bitcoin for timestamp, created fake bitcoin nodes.

Fourth attempt used bitcoin blocks, just break the hand rolled crypto.

Fifth attempt, layered hand rolled crypto, just break it in stages

The call is coming from inside the house - what data do you need?

Michelle Burke @smrtgirl

We don't think enough about what data we're collecting and why we need it. PII should be treated like radioactive waste, only deal with it if you really have to and try to deal with as little as possible.

Don't just go "grab everything, we'll work it out later": this is RISK, business RISK. If you're dealing with PII you now have obligations under the Privacy Act. If you're international you might be under GDPR. You must have a privacy officer, and you must involve them. Just don't collect it if you don't have to.

Great example, loyalty card sign up:

  • You ask for an email so you can contact
  • You ask for a name so you can personalize... um
  • You ask for Date of Birth so you can drive sales with a birthday offer; just stop, that's full-on PII

We all have multiple emails, fake DoB, internet names, but normal people don't!

Instead:

  • Ask how you'd prefer to be addressed, Michelle is OK getting emails addressed to Princess
  • Ask for birth month or star sign, can still drive the offer without needing the DAY

Minimize data collected to what you actually need.

Depression.org leaked depression-questionnaire information via the referrer header, because they included a bunch of third-party analytics, including something that keylogs. Consider whether you really need these in production, and limit them to ones that make sense (not keylogging).
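
A rough sketch of the mechanism (simplified; real `Referrer-Policy` handling has more cases than this): with a permissive policy, the full URL, query string and all, is sent to every third party the page loads resources from.

```python
from urllib.parse import urlsplit

def referrer_for(policy: str, url: str) -> str:
    """Sketch of what a browser puts in the Referer header for a
    third-party request, under a few common policy values."""
    parts = urlsplit(url)
    if policy == "no-referrer":
        return ""                                  # send nothing at all
    if policy == "strict-origin":
        return f"{parts.scheme}://{parts.netloc}/"  # origin only, no path/query
    return url  # permissive legacy behaviour: full URL, query string included

# A questionnaire URL like https://example.org/quiz?score=21 leaks the
# score to every analytics script unless the policy strips it.
```

Setting `Referrer-Policy: no-referrer` (or at least `strict-origin`) on pages that encode sensitive state in the URL is a cheap mitigation.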

Including 3rd party stuff is literally trusting randoms from the internet. Are you sure? Consider whether you really trust these third parties with your clients' data.

Consider the chaining impacts: the depression leak was only scores and IPs, not PII, but an IP could be chained with an e-commerce site that knows who you are. Bad!

3Fun, a dating app for threesomes, sent all the data to the client and filtered it locally, exposing all kinds of sensitive information, including intimate photos, since there were links to S3 buckets.

When you're dealing with PII remember that you're not dealing with 1's & 0's but people. Be careful!

Decrypt everything, everywhere

Birk Kauer @lod108 & Dennis Mantz (@dennismantz)

Talk on breaking into a hardware encryptor. Interesting tools to research. Details were a bit beyond me to be honest.

FRIDA fuzzer?

Super self-service: hacking kiosks using barcodes

Shaquin & Ben

Short version is:

  • remember that barcode readers can act as keyboards
  • barcode reader settings can often be changed using barcodes

So if your assumption is that your system is secure because the only exposed input is a barcode reader, it's not secure.

Barcode readers can do a serial mode or a keyboard-emulation mode. Most can even send special keys such as Shift, Alt, the Windows key, etc.

So you can do a lot to a kiosk with:

  • barcode that enables these modes
  • barcode that acts like a rubber ducky to run a command shell

Protections: disable these modes. But really, assume your kiosk is untrusted. Don't plug it into the corp network, because even if the kiosk itself can be secured, there is a network port that someone can plug their own device into.

A Security Tale

Fobski @Fobski

Fobski was the NZ head of security for Equifax NZ during the breach. The NZ company had recently been acquired, so systems were different. A tale of burnout and breaking people; you never want to go through this.

The NZ office, and most of the global security team, only found out about the breach when it was publicly notified; everyone was in shock. Not the way to do this: give your staff notice.

NZ Assurance had just been audited so was up to date, allowed some more focus.

The US head office was prioritising the remediation streams. 28 streams, and NZ and the other offices were expected to just work through them. BUT the NZ dev teams were working on compliance changes required by NZ law, which couldn't just stop, so Fobski was it: PM, doer, everything.

Supposed to have contractors but because of the acquisition had to use a different supplier, took 6 months to get any contractors on board because of internal process.

Being the sole contact, the days were insane.

  • 6 AM: wake up, try to work out what has changed overnight from the US, and plan
  • Into the office; about 2 hours to himself
  • 11 AM: Australia comes online, changes things further
  • Working late into the night
  • Seeing something in the middle of the night, trying to adapt

If your infrastructure is managed out of Australia because of cost sharing, they'll look after their own. NZ business needs won't be prioritised; he had to beg, threaten, and lean on every personal relationship to get things done.

Basically they had 8 months to implement complex compliance changes. It was more like a product build-out than a project: working through a roadmap, adapting to changing scope, etc.

Don't transition project methodology while doing this. Equifax already had a transition in flight: NZ was doing SAFe but US head office was still doing a waterfall methodology. The dissonance between these was a problem; some of the US requirements made no sense because that context was missing.

Some unique problems in NZ. Moving to cloud means data-sovereignty issues, because it means being Australia-based. The biggest fights were with the rest of the global security team, not with the business: "Why can't you just follow this global template and move to cloud?" "Can't you just do it now and ask permission later?"

Not prepared to break NZ law and compliance. Infrastructure here was good, use it.

You can't hide who you are under stress; it all comes out. Fobski found that empathy was his superpower.

But this broke people. Two people were hospitalized for stress. It broke Fobski.

This is the true hidden cost of these cost-driven decisions: people's mental health.

Liar, Liar: a first-timer "red-teaming" under unusual restrictions.

l0ss @mikeloss

Absolutely hilarious talk. Brilliantly done. I really can't do this justice in notes.

It's a story of red teaming under somewhat hilarious restrictions, including things like "you can't lie to our staff", and the wonderful half-truths that resulted make for a good story.

Compromising a server that was shut down sounds like it broke the IT team a bit.

And then the reprise, called back a year later to test their new incident response process. The deadpan reading of the report of what they had to do to even trigger the incident response process had the entire audience in stitches.

Some of this was captured on YouTube: https://m.youtube.com/watch?v=1wJaKBSAxYU&feature=youtu.be

When I grow up, I want to be a scooter

Matthew Garrett @mjg59

Last year Matthew talked about having fun with a scooter company's API: locating every scooter in the world, tracking people in real time, and identifying cities that the scooter company was going to launch in but had not yet announced. Importantly though, no crimes were committed.

This time round, Matthew decided to dump and reverse engineer the scooter-side firmware to see what fun he could have with the scooter-facing backend API. The claim was made that this is not because he knows what he's doing but because he's just bad at giving up; I'm not sure how much I believe that.

As it turns out, you can get a lot of information on devices with an onboard radio by looking them up on the FCC website, as details need to be registered.

Both the Bluetooth controller and the system-on-chip have JTAG-ish debugging interfaces, and neither was configured to prevent the firmware being dumped.

Having dumped the firmware of the system-on-chip and run strings on it, Matthew found "free ride mode". Using Ghidra he was able to identify that the command to trigger this came straight from the Bluetooth chip.

Dumping the firmware of the Bluetooth chip showed it's split between the "virtual device", which is the Bluetooth API, and the actual logic. Matthew was able to find the command that triggered free ride mode, but it required authentication using some function with a lot of maths. Googling some of the random constants showed that they were AES S-box values, so he knew it was just AES. The "secret" turned out to be all fixed constants or values that are given back over Bluetooth.
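
The S-box trick is easy to sketch: the table's first bytes are well known, so spotting that run among "random" constants in a dump is a giveaway that the mystery maths is AES. (The scanning helper below is my own illustration, not from the talk.)

```python
# First bytes of the standard AES S-box: 0x63, 0x7c, 0x77, 0x7b, ...
AES_SBOX_START = [0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5]

def looks_like_aes(constants):
    """Return True if the known S-box prefix appears anywhere in a
    sequence of constants pulled out of a firmware dump."""
    n = len(AES_SBOX_START)
    return any(list(constants[i:i + n]) == AES_SBOX_START
               for i in range(len(constants) - n + 1))
```

The same approach works for other well-known tables (CRC polynomials, SHA round constants, etc.): google the constant, identify the algorithm.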

So Yay, free ride mode!

GuardRails: A tool to manage k8s securely at speed

Frenchie @nfFrenchie & Dustin Decker

This was an overview of a new tool that Cruise has open sourced: https://github.com/cruise-automation/k-rail

This looks really well thought out to me. It's about enforcing policies to prevent all the ways people can deploy pods that might allow them to escalate privileges, escape the container, and cause other common problems.

What really impressed me is the focus on making it work for developers: immediate, actionable feedback rather than requiring debugging and deep knowledge to work out why your containers were blocked.

It has pluggable rules and a powerful whitelisting capability for exceptions. To be honest, something like this would make me a lot more comfortable with a K8s deployment.

Some of the common problems the existing rules catch:

  • Use of host networking which allows the container to act as the host rather than the pod.
  • Privileged containers which allow all the kernel syscalls which can be used for bad things
  • Containers with additional capabilities which have much the same effect
  • Containers running as root, just don't
  • Containers mounting the docker socket which is effectively root access to the host
  • Use of tiller, tiller is dangerous as it's effectively another unauthenticated API with full root access to the cluster
  • Mounting of host directories, again, lets messing with the host
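
As a sketch (hypothetical names, and plain Kubernetes fields rather than k-rail's actual policy syntax), a pod spec that would pass rules like the above flips each of those settings to the safe value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical pod
spec:
  hostNetwork: false       # don't share the host's network namespace
  containers:
  - name: app
    image: example/app:1.0 # hypothetical image
    securityContext:
      privileged: false              # no unrestricted kernel access
      runAsNonRoot: true             # refuse to run as root
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                # no additional kernel capabilities
    # no hostPath volumes, and definitely no /var/run/docker.sock mount
```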

It works with the higher-level constructs such as deployments, daemon sets, stateful sets, etc., so it gives immediate feedback with a useful description of what is blocked.

One big takeaway for me is that Cruise doesn't use Tiller, but does use Helm. Tiller is generally an unauthenticated API within the cluster that effectively provides cluster admin, which is bad. This can be mitigated somewhat, but the RBAC options are limited. So it's worth looking into how to get the benefits of Helm without Tiller.

The Power of Poseidon: Uncovering Your Network & Becoming a Better Defender

Charlie Lewis

This was a bit beyond me. It was a live demo of Poseidon, a traffic categorization and response tool that works with software-defined networking.

Something to the effect of:

  • When a new device is registered, the SDN is configured to provide a tap of its traffic to Poseidon.
  • Poseidon fingerprints the traffic to work out what kind of device it is and what role it fulfills, e.g. a dev workstation.
  • ACLs are then applied automatically as appropriate for the categorization.
  • The categorization also defines what normal traffic looks like, so it then somehow looks for anomalous traffic, e.g. if a printer starts making SSH connections that's not normal.
  • Additional ACLs can be applied to anomalous traffic (or to the device while it is anomalous, not sure).

Meat and Three Segfault

DoI

DoI decided to find vulnerabilities in the game Super Meat Boy, just as practice, and explained the process. Particularly since this game has no multiplayer, it's an interesting exercise in how you attack something you don't expect to have an internet connection.

DoI explained his thought process and tools so it makes an interesting primer.

First step was working out how it's even possible to attack the software, so he started it under strace and clicked around. strace tracks all the system calls made by the program.

Grepping the result for open showed all the files that were opened, which are possible attack vectors.

Grepping the result for connect showed any network connections. A lot of Unix sockets, but surprisingly a connection to a MySQL DB on the internet. So that's a vector.

Started to build a dataflow diagram using the Python threat-modelling tool pytm, and also an attack tree, to visualise the ways the software could be attacked and how those could lead to the goal / the chain of exploits required. Offensive-security types generally do this implicitly, but it's a really good way of:

  • explaining the process to others
  • keeping notes about what you were thinking
  • explaining the level of risk when reporting issues
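
As an illustration (plain nested dicts of my own devising, not pytm syntax), the attack tree for this target might look something like:

```python
# OR nodes are alternative routes to the goal; AND nodes are the steps
# all required for that route. Both routes here come from the talk.
attack_tree = {
    "goal": "remote code execution on players' machines",
    "OR": [
        {"step": "compromise via the remote MySQL database",
         "AND": ["pull credentials out of the binary with a debugger",
                 "compromise the DB server",
                 "trigger the schema-change stack overflow in clients"]},
        {"step": "compromise via a malicious save-game file",
         "AND": ["find a parser crash by fuzzing the save format",
                 "convince a player to load the traded save file"]},
    ],
}
```

Writing it down like this also makes the reported risk legible: each leaf that is already demonstrated shortens the remaining chain.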

Ran the program through a disassembler to visualize the code flow and identify where the connection to the MySQL database was coming from.

Now that he knew how it could be attacked, he fired up the program under gdb so it could be debugged. Adding a breakpoint on the MySQL connection made it easy to pull out the connection details, including the username and password.

He intercepted that so it was calling a DB under his control, and experimented with changing the schema. He got a stack overflow by changing a column from a varchar to a clob and making the data much longer. You would need to compromise the database to exploit this, but it is an attack vector: a compromised DB could give remote code execution on all the client installs.

Then he moved on to exploiting via the save-game files; people often trade save games, so it would be possible to convince someone to load a malicious save-game file.

He decided to use American Fuzzy Lop (AFL) to fuzz the save-game input file to see if he could trigger a crash. Slight challenge in that AFL expects the program to exit after processing input, whereas a game continues playing when it doesn't crash.

So he used Cutter, a reverse-engineering tool, to patch the binary so that it would exit after loading the file. That way AFL would either detect a crash (if the fuzzing was successful) or a clean exit.

Something about DynamoRIO in the context of AFL; not sure what this was, need to do some googling.

This worked, but it was slow, and the key with fuzzing is speed. So instead he wanted to invoke the fuzzing target as a direct function call rather than a full program execution.

To do this, he created a C library that replaced the read system call and put it in the LD_PRELOAD environment variable so it would take precedence over the standard library. He used this to spawn a thread that would run the fuzzing directly against the save-game load function.

In order to get a reference to the save-game load function, he took the function's address from the decompiler / visualization tool, then de-referenced that literal value into a function pointer.
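
The talk's hook was a C library, but the address-to-function-pointer step can be sketched in Python with ctypes (the address below is made up for illustration, and actually calling the pointer would of course only work inside the game process):

```python
import ctypes

# Hypothetical address of the save-game load function, read off the
# disassembler. In the real attack this literal came from the tool's
# code-flow view of the patched binary.
LOAD_SAVE_ADDR = 0x0804F2A0

# Declare the function's shape (returns int, takes a byte buffer) and
# turn the raw address into a callable -- construction alone is safe,
# it only dereferences when called.
load_save_t = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_char_p)
load_save = load_save_t(LOAD_SAVE_ADDR)

# The preloaded library's fuzzing thread can now call
# load_save(fuzz_input) directly, skipping the rest of the game loop
# on every iteration.
```

This is the same trick that makes in-process fuzzing fast: each test case costs one function call instead of one process start-up.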

So yeah, some stuff to try.
