Monday, August 1, 2022

VoWifi leaking IMSI

This is mostly a copy of a blog post I wrote for Working Group Two, where I worked when this research into IMSI leakage over Voice over Wifi was done.


4G offers more services than earlier generations such as 3G and 2G. Two services that have really gained traction in recent years are VoLTE (Voice over LTE) and VoWifi (Voice over Wifi); we will go more in depth on the security of the latter.

VoWifi is beneficial in that any Wifi connection offering public internet access can be used, extending and improving coverage and connectivity. Think of it as building our own cellular network, but using commodity wifi components instead and avoiding the strict regulation and licensing of spectrum.


What is an IMSI

The international mobile subscriber identity (IMSI) is a number that uniquely identifies every user of a cellular network. It is stored as a 64-bit field and is sent by the mobile device to the network. It is also used for looking up other details about the mobile in the home location register (HLR) or, as locally copied, in the visitor location register (VLR). To prevent eavesdroppers from identifying and tracking the subscriber on the radio interface, the IMSI is sent as rarely as possible; a randomly generated TMSI is sent instead.
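The IMSI itself is structured: a 3-digit mobile country code (MCC), a 2- or 3-digit mobile network code (MNC) and the subscriber number (MSIN). A minimal sketch of splitting one apart (the function name and example value are my own illustration):

```python
def split_imsi(imsi: str, mnc_len: int = 2) -> dict:
    """Split an IMSI into its MCC / MNC / MSIN components.

    The MNC is 2 or 3 digits depending on the operator and cannot be
    inferred from the IMSI alone, so the caller supplies mnc_len.
    """
    assert imsi.isdigit() and len(imsi) <= 15
    return {
        "mcc": imsi[:3],
        "mnc": imsi[3:3 + mnc_len],
        "msin": imsi[3 + mnc_len:],
    }
```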


Security implications

The IMSI is a secret identifier stored on the SIM and can be exploited in many ways once known. It is bound to the SIM, so changing the UE (handset) will not help.

Examples:

    • Locating the user (UE)
    • Intercepting calls
    • Intercepting SMS (e.g. stealing a two-factor PIN)
    • ...and more

How VoWifi works

When your phone is connected to a data network with VoLTE and VoWifi enabled, the device (UE) establishes a SIP session either directly to the packet gateway via 4G, or via the public internet to the ePDG (Evolved Packet Data Gateway), which is in essence an IPsec (IKEv2) termination point using EAP-AKA to authenticate the UE. IPsec comes into play since the ePDG is exposed publicly.

We won't go further in depth on VoLTE and VoWifi since there are already excellent articles on the matter:

[figure: VoWifi topology]

To enable VoWifi on your device, please refer to your device manufacturer's website:

Also check your operator's website to see if VoWifi is supported in your region. Please note that VoWifi is usually blocked when roaming.


EPDG exposed on the public internet

The Evolved Packet Data Gateway needs to be publicly available on the internet since the UE needs to reach it from an arbitrary, untrusted connection. IPsec encrypts the data and maintains the integrity of the connection throughout the session.

The UE finds the ePDG termination point by looking up DNS records that follow a naming convention decided by 3GPP, which typically looks like this in DNS:

epdg.epc.mnc999.mcc999.pub.3gppnetwork.org. 3488 IN A 1.2.3.4
epdg.epc.mnc999.mcc999.pub.3gppnetwork.org. 3488 IN A 5.6.7.8
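The naming convention can be sketched as a small helper (the function name is my own; per 3GPP TS 23.003 a 2-digit MNC is zero-padded to three digits in the domain name):

```python
def epdg_fqdn(mcc: str, mnc: str) -> str:
    # The MNC is zero-padded to three digits in the FQDN per the
    # 3GPP naming convention; the MCC is always three digits.
    return "epdg.epc.mnc{:03d}.mcc{}.pub.3gppnetwork.org".format(int(mnc), mcc)
```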

From the DNS records we can recognize the network operator (MNC) and the country code (MCC).

The DNS records are registered under a delegated domain owned by the GSMA and are usually redelegated to the operator under their own umbrella, like the example mnc999.mcc999.pub.3gppnetwork.org.


The problem

When the UE establishes a session to the ePDG, it uses a VPN: an IPsec relationship using IKEv2 for authentication, encryption and integrity.

So far the implementation works as intended and provides good security through encryption and integrity protection.

The problem is not VoWifi per se, but rather how IPsec establishes the session. When the UE connects to the ePDG, it acts as the initiator, and the ePDG is inherently passive since it cannot know which IP the UE will come from.

[figure: SIM-AKA flow]

EAP-AKA exposes identity

VoWifi, as mentioned earlier, relies on an authentication method based on the widely adopted Extensible Authentication Protocol. EAP itself is just a framework and does not define the contents of the data or exactly what the exchanges look like. EAP-AKA unfortunately exposes the unencrypted user identity during the authentication session, and in this case the user identity is derived from the IMSI.
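As a sketch, the identity that leaks is an NAI built from the IMSI: per RFC 4187 the permanent username is the IMSI prefixed with '0', and the realm follows the 3GPP WLAN convention (the IMSI value in the test is made up; real ones are masked in the log snippet below):

```python
def eap_aka_permanent_identity(imsi: str, mcc: str, mnc: str) -> str:
    # The '0' prefix marks an EAP-AKA permanent identity (RFC 4187);
    # the realm uses the 3GPP WLAN NAI convention.
    return "0{}@wlan.mnc{:03d}.mcc{}.3gppnetwork.org".format(imsi, int(mnc), mcc)
```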

Solution

This is the hardest problem to solve, since it needs a security layer or settings that take effect before IPsec starts connecting.

The proposed solutions

    • Force the use of a conservative peer for EAP-AKA/SIM and use pseudonym identities (comparable to a TMSI) to avoid exposing the IMSI.
    • Enable EAP-TTLS in addition to EAP-AKA/SIM
    • Only connect to trusted/encrypted APs

Fake IPsec termination exposes identity

We can impersonate an ePDG by redirecting all DNS requests for anything under pub.3gppnetwork.org. to our own fake IPsec termination, which provides just enough of the protocol to catch the IMSI.

A Raspberry Pi can easily be set up to constantly scan for open wifi networks, then impersonate the SSIDs in the hope of luring UEs to connect. Any UE set to use VoWifi that connects to the fake access point will give away its IMSI.
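The DNS-redirection half can be done with a single dnsmasq rule; `address=/domain/ip` makes dnsmasq answer every query under that domain with the given address (the IP here is illustrative lab addressing):

```
# Answer all queries under pub.3gppnetwork.org with the address of
# our own IKEv2 responder instead of the operator's real ePDG.
address=/pub.3gppnetwork.org/192.168.17.1
```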

A wifi SSID scan example of what we can automate. In this case Isfjell-Guest would have been picked for catching IMSIs, since it is open and unencrypted:

[figure: wifi SSID scan]

Snippet from the IPsec termination; the UE (an iPhone 8) exposes its IMSI several times:

13[ENC] parsed IKE_AUTH request 2 [ EAP/RES/AKA ]
13[IKE] '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org' is not a reauth identity
13[IKE] '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org' is not a pseudonym
13[IKE] received identity '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org'
13[IKE] no EAP key found for 09999994511******@wlan.mnc999.mcc999.3gppnetwork.org to authenticate with AKA
13[LIB] tried 0 SIM providers, but none had a quintuplet for '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org'
13[IKE] failed to map pseudonym/reauth identity '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org', fallback to permanent identity request
13[ENC] generating IKE_AUTH response 2 [ EAP/REQ/AKA ]
13[NET] sending packet: from 192.168.17.1[500] to 192.168.17.24[500] (92 bytes)
09[NET] received packet: from 192.168.17.24[500] to 192.168.17.1[500] (140 bytes)
09[ENC] parsed IKE_AUTH request 3 [ EAP/RES/AKA ]
09[IKE] received identity '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org'
09[IKE] no EAP key found for 09999994511******@wlan.mnc999.mcc999.3gppnetwork.org to authenticate with AKA
09[LIB] tried 0 SIM providers, but none had a quintuplet for '09999994511******@wlan.mnc999.mcc999.3gppnetwork.org'
09[IKE] EAP method EAP_AKA failed for peer 09999994511******@nai.epc.mnc999.mcc999.3gppnetwork.org
09[ENC] generating IKE_AUTH response 3 [ EAP/FAIL ]

Solution

IPsec clients (UEs) are able to verify the identity of the ePDG by requesting and validating a machine certificate proving it is the actual service belonging to the requested DNS address. This means that when the client connects, the server has to provide a valid certificate containing the DNS names, signed by a trusted CA.
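In strongSwan terms this corresponds to the responder authenticating with a certificate while the client uses EAP-AKA; a sketch of an ipsec.conf conn section (the connection name, file path and DNS name are illustrative):

```
conn vowifi
    # The ePDG authenticates with a certificate matching its DNS name...
    leftauth=pubkey
    leftcert=epdgCert.pem
    leftid=epdg.epc.mnc999.mcc999.pub.3gppnetwork.org
    # ...while the UE authenticates with EAP-AKA.
    rightauth=eap-aka
    eap_identity=%identity
    auto=add
```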

Raspberry Pi 4

This is the specific physical setup used for testing. Older Pis should work just fine, as should other platforms that can run dnsmasq, tshark and strongSwan for IPsec with support for EAP-AKA/SIM.

[figure: RPi 4 and battery]

The white box is the RPi 4 in an original casing; the gray box is a battery bank.


Tuesday, March 1, 2016

Blockchain and SAML


Emperor's new clothes? Blockchain and SAML provide an interesting use-case where the increased integrity can be used to strengthen provability and/or to provide a valid revision history that has not been tampered with (beyond reasonable doubt).

While the data available in a SAML context is very limited, together with other supporting tools such as a SIEM it can provide enough data to prove whether or not something happened.



E.g. the following text during a successful login:

Feb 29 14:17:47 simplesamlphp NOTICE STAT [1dddb4dd04] User 'test' has been successfully authenticated.

sha256: 9a5780abcd0957eb3fc6b69592985b08ef0883decb28901a28c6ad1cf0aa8c36

And the following for a two-factor step in addition to username/password:

2016-02-29 14:17:47 | [1dddb4dd04] 4xxxxxx5

sha256: e8a6bdf19eaa2551a76cc8583149153dc7e2cdceae4f56a330eb07a2034c3341


sha256 of the chained login hash + twofa log entry:
0519e13f6e9afccaea907e8f9f3df007529c2fbbe216c45f8e2cbc5036cce34d

This could be stored in an asset named 'test', since the context is a user with the userid as primary key. Further chaining can continue within the same asset, or be extended to also include more context such as source or destination (webpage).
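A sketch of the chaining idea in Python using hashlib; the exact concatenation scheme (previous hash prepended to the hash of the new entry) is my assumption for illustration, not necessarily the scheme used above:

```python
import hashlib

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def chain(prev_hash: str, entry: str) -> str:
    # Link a new log entry to the chain by hashing it together with
    # the previous link, so history cannot be rewritten unnoticed.
    return sha256_hex(prev_hash + sha256_hex(entry))

login_entry = "User 'test' has been successfully authenticated."
twofa_entry = "[1dddb4dd04] 4xxxxxx5"

link1 = sha256_hex(login_entry)
link2 = chain(link1, twofa_entry)
```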

Sunday, December 27, 2015

Plotting Ceph topology

Plotted using PHP and JavaScript with d3js, and a hint of Bootstrap to beautify it. The crushmap contains routing specifics for handling what one usually calls tiering, in the sense of using different storage type classes.


Generated from the following raw node list, parsed from the ceph export.

id: 0 name: device0
id: 1 name: device1
id: 2 name: device2
id: 3 name: osd.3
id: 4 name: osd.4
id: 5 name: osd.5
id: 6 name: osd.6
id: 7 name: osd.7
id: -1 name: default root
id: -2
id: -3
id: -4
id: -5
id: -6
id: -2 name: ceph2 host
id: 5
id: -3 name: ceph3 host
id: 7
id: -4 name: ceph4 host
id: 3
id: -5 name: ceph5 host
id: 4
id: -6 name: ceph1 host
id: 6
id: -7 name: trd region
id: -8 name: hw rack
id: -9 name: vm rack
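A sketch of parsing that node list in Python (my own illustration, not the code behind the plot): lines with an id and a name declare devices or buckets, while bare id lines list the members of the preceding bucket.

```python
import re

def parse_nodes(raw: str):
    """Parse 'id: <n> name: <name> [<type>]' lines from a crushmap dump.

    Returns (nodes, children): nodes maps id to name/type, children maps
    a bucket id to the bare ids listed after its declaration.
    """
    nodes, children, current = {}, {}, None
    for line in raw.splitlines():
        m = re.match(r"id:\s*(-?\d+)(?:\s+name:\s*(\S+)(?:\s+(\w+))?)?$",
                     line.strip())
        if not m:
            continue
        node_id = int(m.group(1))
        if m.group(2):  # declaration line: id + name (+ optional type)
            nodes[node_id] = {"name": m.group(2), "type": m.group(3)}
            current = node_id
        elif current is not None:  # bare id line: member of current bucket
            children.setdefault(current, []).append(node_id)
    return nodes, children
```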


Monday, November 23, 2015

(Rolling) restart of elasticsearch datanodes

elasticsearch 1.7.3





A planned restart of the data nodes should include stopping shard routing, to avoid unnecessary rebalancing and a prolonged recovery period when the node rejoins the cluster.

Example:
Stop the routing:
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}

Should reply with:
{
  "persistent": {
  },
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "none"
        }
      }
    }
  },
  "acknowledged": true
}


Stop the node and do whatever you need to do, then start it again and wait for the cluster to report the node rejoining in the logs:
[2015-11-23 01:18:32,623][INFO ][cluster.service          ] [servername] added {[servername2][2DwlAl3SAe-aijdas1336Ew][servername2][inet[/1.1.1.2:9300]],}, reason: zen-disco-receive(join from node[[servername2][2DwlAl3SAe-aijdas1336Ew][servername2][inet[/1.1.1.2:9300]]])

When to re-enable routing depends on how busy the cluster is. Re-enabling will cause additional load and stress, and I have been delaying it for some hours until a suitable occasion appeared. I must stress that the documents held by the rejoined node will not be visible to the cluster, nor will the rejoined node in any way offload the rest, since the data it has is not considered present (of course).

Depending on shard and replication settings, the cluster might also not be redundant while routing allocation is off; the overall status will be yellow, which is ok during the transition.

To re-enable the routing:
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}

Watch the status until it becomes green, then continue to the next node if needed. The time for the cluster to become green (that is, in this context, for all shard and replication criteria to be met) varies greatly with the capacity of each node, the overall load, and not least the number of documents. E.g. a 3-node cluster with 200M docs should spend somewhere around 10 minutes on recovery, not more.

GET _cluster/health
{
  "cluster_name": "clustername",
  "active_primary_shards": 56,
  "active_shards": 112,
  "number_of_data_nodes": 3,
  "number_of_in_flight_fetch": 0,
  "number_of_nodes": 5,
  "unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "timed_out": false,
  "delayed_unassigned_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "status": "green"

}
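When scripting the rolling restart, a small helper (illustrative, not from any official client) can gate on the health output above before moving to the next node:

```python
import json

def cluster_ready(health_json: str) -> bool:
    """True when the cluster is green with no shards moving or unassigned."""
    h = json.loads(health_json)
    return (h["status"] == "green"
            and h["relocating_shards"] == 0
            and h["initializing_shards"] == 0
            and h["unassigned_shards"] == 0)
```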


Thursday, November 5, 2015

How to replace a headset

Replacing the headset is usually very straightforward and simple, given you have access to the right tools, which you will see in the pictures below. The two main tools are the extractor and the press-fit compressor. Theoretically one can do without these tools, but at the risk of damaging the frame. The frame below is a Scott CR1 carbon, and I would say that doing this job without the tools would probably result in a damaged frame. The press-fit cups tend to sit very hard since, in contrast to the bottom bracket, they are not in aluminium.

The reason for replacing the headset was a rusty lower bearing; the headset started to bind shortly after use, which led me to doubt its quality. I continued to use the bike for the rest of last season and this one, but it got worse and worse, affecting safety, especially noticeable during fast descents.


Dismantle the stem in the usual way, first by removing the rings and the top cap.


Be sure to hold the fork while loosening the stem, to avoid it falling to the floor.


The fork should come loose, and some of the further service might require dismantling the front brake.


The top bearing is looking good, with almost no rust, and does not really need replacement.


Inspect the top and bottom to see if there is any wear, cracks etc. in the carbon.


I purchased a new, complete, high-quality sealed BBB headset. The original Ritchey was not sealed, and I believe that could have accelerated the wear and tear, since replacing the headset should only be needed every 3-5 years.


This is the press-fit extractor; it enables you to apply force on the press-fit cup itself and not the frame when hammering it out.


Insert the extractor backwards.


Be sure that the blades on the extractor engage on the inside edges of the press-fit cup. If a rubber hammer doesn't do it, use a metal hammer, but be careful and take your time to avoid damaging the frame.


The bearing can usually be removed without force, since it is only held in place by the fork, which is now removed. As you can see, the bearing was very rusty, and it is no wonder it caused binding.


Same process for removing the top press-fit cup; notice the metal hammer :)


Inspect the inside for the reasons mentioned earlier. The white stuff you see inside the carbon frame is normal and is residue from the molding.


Wipe and clean.


Notice the remains of the old bearing; this is a seal and must be removed, since it does not fit the new bearing.


Removal was hard in this case, since the rust had glued it to the carbon.


Careful prying got it moving after a while



There are different opinions on what to do with the surface between the press-fit cup and the carbon. Some say to keep it dry, but I prefer to lube it up, in this case using lithium grease for longevity and resistance to moisture.


New press-fit cups lined up with the compressor; be careful when tightening to keep everything aligned.


New lower bearing inserted onto the fork.


New top bearing inserted.


Insert the spacers and tighten up by hand, being careful not to tighten too hard. It should not bind when turning, and there should be no slack.


There, done!


Go biking!

Wednesday, November 4, 2015

Elasticsearch and stemming

The main use of Elasticsearch is storing logs, and lots of them. Searching through the data using the Kibana frontend is awesome, and one usually finds what one is looking for.

But let's have a bit of fun with stemming in Elasticsearch.

Elasticsearch provides good support for stemming via numerous token filters, but let's focus on the Hunspell stemmer this time.

Elasticsearch has the stemmer already (v1.7.3), but does not ship with the words. So the first step is to get the dictionaries and install them; I'm not going into details here. Bounce the cluster (yep, all nodes) and make sure the dictionaries load in nicely.

By default, newly created indices do not use stemming, so one has to set this when creating the index.

put /skjetlein

{
  "settings": {
    "analysis": {
      "filter": {
        "en_US": {
          "type":     "hunspell",
          "language": "en_US" 
        }
      },
      "analyzer": {
        "en_US": {
          "tokenizer":  "standard",
          "filter":   [ "lowercase", "en_US" ]
        }
      }
    }
  }
}

If the dictionaries are missing from one or several nodes, you will receive a failure notice.

Otherwise:
{
  "acknowledged": true
}

Verify the settings:

get /skjetlein/_settings

{
  "skjetlein": {
    "settings": {
      "index": {
        "uuid": "ny3n0uJMRKywpvy6OCRmLw",
        "number_of_replicas": "1",
        "analysis": {
          "filter": {
            "en_US": {
              "type": "hunspell",
              "language": "en_US"
            }
          },
          "analyzer": {
            "en_US": {
              "filter": [
                "lowercase",
                "en_US"
              ],
              "tokenizer": "standard"
...

Let's test the stemming:

get skjetlein/_analyze?analyzer=en_US -d "driving painting traveling"

...the output should be something like this:
{
  "tokens": [
    {
      "token": "drive",
      "start_offset": 0,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "painting",
      "start_offset": 8,
      "end_offset": 16,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "paint",
      "start_offset": 8,
      "end_offset": 16,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "traveling",
      "start_offset": 17,
      "end_offset": 26,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "travel",
      "start_offset": 17,
      "end_offset": 26,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}

I.e. the result is drive, paint and travel. Looks good.

So what can the use case be in the context of Elasticsearch, which is usually storing vast amounts of logs (events)? Well, let's say I search through the logs for problems with filesystems. Elasticsearch as-is would require search strings that include every possible word related to filesystems, since logs (in this context e.g. syslog) do not provide the information in such a way that the search can be expressed consistently.

E.g. various filesystem names can be stemmed to 'filesystem':
"custom_stem": {
          "type": "stemmer_override",
          "rules": [ 
            "ext2fs=>filesystem",
            "nfs=>filesystem",
            "btrfs=>filesystem"
...

or
           "postfix=>mail",
            "smtp=>mail",
            "qmail=>mail"
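Maintaining such rules by hand gets tedious; a small sketch (the names are my own) that generates the stemmer_override rules list from a category map:

```python
def override_rules(categories):
    """Expand {stem: [terms]} into Elasticsearch stemmer_override rules."""
    return [
        "{}=>{}".format(term, stem)
        for stem, terms in categories.items()
        for term in terms
    ]

categories = {
    "filesystem": ["ext2fs", "nfs", "btrfs"],
    "mail": ["postfix", "smtp", "qmail"],
}
```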



Friday, October 30, 2015

Logstash filters

It is not always a good thing to have options, many of them. I tend to start thinking about all the combinations, the pluses and minuses of every aspect. And how about the future: what possible negative consequences could it have if I choose a instead of b now, and how hard would it be to change back?

Diving into logstash, not to mention logstash-forwarder (lumberjack), is a daunting task. It is not difficult or hard to understand, but dealing with all the choices is.

Recently I had a dilemma, no big thing, but anyway: when to set "type"?

But the real question is why set type at all? Well, the typical use is to tell logstash how to process the data. Do we really need the type setting? Not really, but it simplifies the configuration and makes it more readable too.

I hastily set up logstash-forwarder on a webserver with a large amount of traffic and, without really thinking about any technical/architectural decisions, set the type on the client. When working through the pipeline and finally configuring logstash, I noticed that the type was already set, but not to exactly what fitted my need.

The type was set on the client to apache-access; the access logs need their own type declaration, since the log format differs from e.g. the error log. But on the logstash side I had used the more general type 'apache'. I could not just change this, since logstash was already receiving data from other servers in production.

So back to options. A neat thing with logstash-forwarder is the annotation of the object sent: if data comes from a log file, the object is annotated with where it came from. Then, with some grok'ing, it is easy to filter objects based not only on the set type, but also on the source file name.

Eg.
filter {
  if [type] == "apache" {
    grok { 
      match => { "message" => "%{COMBINEDAPACHELOG}" }
      match => { "file" => "%{GREEDYDATA}.access.log" }
    }
  }
}
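Note that in the grok pattern above the dots in `.access.log` are unescaped regex dots, so they match any character. The file-name idea boils down to something like this plain-regex sketch (slightly stricter than the grok line, with the dots escaped):

```python
import re

# Matches paths whose name ends in ".access.log"; dots are escaped here,
# unlike the grok pattern, where "." would match any character.
access_log = re.compile(r".*\.access\.log$")

print(bool(access_log.match("/var/log/httpd/site1.access.log")))  # True
print(bool(access_log.match("/var/log/httpd/error.log")))         # False
```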
