Understanding the Security Impacts of the iPhone Call Recording App Vulnerability
News about a vulnerable call recording app for iPhone made the rounds in early March when TechCrunch ran an article about the event. “Call Recorder,” or “Acr call recorder” as it is listed in the Apple App Store, used an insecurely designed web API to fetch call recordings from AWS S3 cloud storage. While this vulnerability might not seem as glamorous at first glance as a high-profile breach of a major brand, it offers a number of valuable lessons. The event also shares patterns with other high-profile incidents and breaches. Most of the issues map directly to the OWASP API Security Top 10, a list that captures the most common API mistakes. Our discussion focuses on steps you can take for better API security, and we also include some interesting mobile security and cloud security aspects.
Fortunately, the security researcher who discovered the issues, Anand Prakash of Pingsafe AI, and Zack Whittaker from TechCrunch responsibly disclosed the weaknesses to the app creator before publishing their findings. It’s not clear whether any malicious parties found these issues prior to the disclosure and accessed recordings by exploiting the weaknesses. The app developer and publisher, Arun Nair, has issued no public statements as of this writing.
Deep dive into the Call Recorder security issues
Anand started his research by reverse engineering the iOS mobile binary, an IPA package. Techniques and tooling to do so are readily available on the Internet. OWASP maintains one such source at the page iOS Tampering and Reverse Engineering. A mobile binary is not a “black box,” and practitioners should not consider the mobile code to be protected by default. Attackers can readily reverse engineer mobile code to make sense of internal business logic of an application, harvest API keys, and steal intellectual property. Security researchers also use the same reverse engineering and debugging techniques to proactively find issues.
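To make this concrete, here is a minimal sketch of the kind of static inspection a researcher or attacker might run against an IPA, which is just a ZIP archive. The filename and the patterns searched for are illustrative, not taken from the actual Call Recorder app:

```python
import re
import zipfile

# An IPA is a ZIP archive: the app binary and resources live under Payload/<App>.app/.
# The filename and patterns below are illustrative, not from the actual Call Recorder app.
IPA_PATH = "CallRecorder.ipa"

url_pattern = re.compile(rb"https?://[\x21-\x7e]+")   # printable-ASCII URL candidates
aws_key_pattern = re.compile(rb"AKIA[0-9A-Z]{16}")    # classic AWS access key ID shape

with zipfile.ZipFile(IPA_PATH) as ipa:
    for name in ipa.namelist():
        if name.endswith((".png", ".jpg", ".car")):   # skip obvious image assets
            continue
        data = ipa.read(name)
        for match in url_pattern.findall(data):
            print(f"{name}: endpoint candidate {match.decode(errors='replace')}")
        for match in aws_key_pattern.findall(data):
            print(f"{name}: possible embedded AWS key {match.decode()}")
```

A few minutes of this kind of string and pattern hunting is often enough to map out an app's back-end API and any credentials baked into the client.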
Another commonality between attackers and researchers is the use of intercepting proxy tools such as Charles Web Debugging Proxy, Telerik Fiddler, OWASP Zed Attack Proxy, and PortSwigger Burp Suite. These tools are incredibly useful for exposing application communications as well as manipulating them. Like many things in life, they can be used for good or for evil. The tools themselves are not the problem; the problem is a person with malicious intent running them.
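As an illustration, a tester typically points client traffic at a local intercepting proxy and inspects or replays requests from there. The endpoint and payload below are hypothetical; only the proxy wiring reflects how tools like Burp or ZAP are commonly used:

```python
import requests

# Route test traffic through a local intercepting proxy (e.g., Burp or ZAP on port 8080)
# so every request and response can be inspected, modified, and replayed.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

resp = requests.post(
    "https://api.example.com/fetch-recordings",   # hypothetical endpoint for illustration
    json={"UserID": "+15551234567"},
    proxies=proxies,
    verify=False,   # accept the proxy's self-signed certificate during testing only
    timeout=10,
)
print(resp.status_code, resp.text[:200])
```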
The issues present in the Call Recorder application were:
- Broken authentication
- Broken object level authorization (BOLA)
- Lack of encrypted transport
- Excessive data exposure
- Lack of pseudonymous identifiers
- Unsecured cloud storage
Broken authentication
The security researcher captured a request originating from the mobile app that fetches the locations of call recordings. We can see in the POST request that the endpoint was not enforcing authentication, since the HTTP headers lack any type of authentication cookie, session cookie, bearer token, etc. The headers that were present, highlighted below, include only basics such as content type, user agent, and encoding.
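For contrast, here is a minimal sketch (using Flask purely for illustration) of the check the vulnerable endpoint was missing: reject any request that does not carry verifiable authentication material. The route and helper names are assumptions, not the app's actual API:

```python
from typing import Optional

from flask import Flask, jsonify, request

app = Flask(__name__)

def verify_token(token: str) -> Optional[str]:
    """Hypothetical helper: validate signature, expiry, and audience of a bearer token.
    Returns the authenticated user ID, or None if the token is invalid."""
    return None  # stub for illustration

@app.route("/fetch-recordings", methods=["POST"])
def fetch_recordings():
    # The vulnerable endpoint accepted requests with no authentication material at all.
    # The missing check: require and validate a bearer token before doing anything else.
    auth_header = request.headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        return jsonify({"error": "authentication required"}), 401
    user = verify_token(auth_header[len("Bearer "):])
    if user is None:
        return jsonify({"error": "invalid or expired token"}), 401
    # Only now look up recordings, and only for the authenticated user.
    return jsonify({"recordings": []})
```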
You can find further technical details on this type of issue on the Salt Security blog at API2:2019 Broken User Authentication.
Broken object level authorization
BOLA, the most common of API security risks, rears its head again here. The researcher was easily able to alter the value of “UserID” to that of another user of the application, and the system returned the locations of that user's recordings, failing to enforce authorization. The identifier could also be easily enumerated since its value followed a standard phone number format.
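To illustrate why predictable identifiers make BOLA so damaging, the sketch below iterates plausible phone numbers and asks the API for each one's recordings. The endpoint, field names, and number range are hypothetical:

```python
import requests

# Because "UserID" was simply a phone number, an attacker could iterate plausible
# numbers and request each one's recordings. Endpoint, field names, and the number
# range here are hypothetical.
ENDPOINT = "http://api.example.com/fetch-recordings"   # plain HTTP, as the app used

for last_four in range(10000):
    candidate = f"+1555123{last_four:04d}"
    resp = requests.post(ENDPOINT, json={"UserID": candidate}, timeout=5)
    if resp.ok and resp.json().get("recordings"):
        print(f"{candidate}: recordings exposed")
```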
You can find further technical details on this type of issue on the Salt Security blog at API1:2019 Broken Object Level Authorization.
Lack of encrypted transport
Requests to fetch recordings did not use encrypted transport. We would expect to see traffic flowing over port 443 for HTTPS. In this case, the app used HTTP over port 80, as highlighted below. Use of HTTP rather than HTTPS makes interception and manipulation that much easier for attackers, since traffic is sent in the clear, unencrypted.
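One simple client-side guard, sketched below, is to refuse to transmit sensitive payloads over anything other than HTTPS. The URL shown is illustrative; the scheme check is the point:

```python
from urllib.parse import urlparse

import requests

API_URL = "http://api.example.com/fetch-recordings"   # illustrative; the real app used port 80

def post_sensitive(url: str, payload: dict) -> requests.Response:
    """Refuse to transmit sensitive payloads over anything other than HTTPS."""
    scheme = urlparse(url).scheme
    if scheme != "https":
        raise ValueError(f"refusing to send sensitive data over {scheme!r}; use https")
    return requests.post(url, json=payload, timeout=5)

post_sensitive(API_URL, {"UserID": "+15551234567"})   # raises ValueError for this http:// URL
```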
Excessive data exposure
The HTTP response from the API endpoint contained the URL of recordings in an AWS S3 bucket in the “s3_key” value as highlighted below. An attacker could then use this information to pull the recording directly from AWS S3.
The rest of the JSON payload also included metadata about the caller and callee phone numbers, contained in the “caller_number” and “callee” values as highlighted below. Phone numbers are a form of PII and a regulated data type. Depending on the regulatory impact, these types of sensitive data must be pseudonymized, masked, tokenized, encrypted, or not used at all. Organizations must ensure controls are relevant to the application architecture and protect sensitive data in transit, in use, and at rest as appropriate.
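A common remediation is to define an explicit public view of each record and return only those fields, masking PII along the way. The record shape below is illustrative; the field names simply mirror those mentioned above:

```python
# Field names mirror those described above ("s3_key", "caller_number", "callee"),
# but the record shape itself is illustrative.
full_record = {
    "id": "rec_123",
    "s3_key": "recordings/2021/03/rec_123.mp3",   # internal storage detail: never return it
    "caller_number": "+15551234567",
    "callee": "+15557654321",
    "duration_seconds": 182,
    "created_at": "2021-03-01T10:15:00Z",
}

def mask_number(number: str) -> str:
    """Show only the last two digits of a phone number."""
    return "*" * (len(number) - 2) + number[-2:]

def to_public_view(record: dict) -> dict:
    """Return only the fields the mobile client actually needs, with PII masked."""
    return {
        "id": record["id"],
        "caller_number": mask_number(record["caller_number"]),
        "duration_seconds": record["duration_seconds"],
        "created_at": record["created_at"],
    }

print(to_public_view(full_record))
```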
You can find further technical details on this type of issue on the Salt Security blog at API3:2019 Excessive Data Exposure.
Lack of pseudonymous identifiers
“UserID” was directly equal to a given user’s phone number, which is a form of PII. Record and object identifiers should be unique, non-predictable, and non-sequential, and it shouldn’t be possible to infer the identity of an individual from them. These phone numbers could easily have been run through a reverse phone number lookup to go further in the attack chain and possibly social engineer or blackmail someone.
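A minimal sketch of the alternative: issue a random, opaque identifier at signup and keep the phone number server-side in a lookup table. The mapping shown is illustrative:

```python
import uuid

def new_user_id() -> str:
    """Issue a unique, non-predictable, non-sequential identifier."""
    return str(uuid.uuid4())

# The phone number stays server-side in a lookup table; the API only ever sees the UUID.
user_id = new_user_id()
phone_to_user = {"+15551234567": user_id}   # illustrative mapping, kept out of API responses
print(user_id)   # reveals nothing about the person behind it
```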
Unsecured cloud storage
It’s unclear whether the S3 buckets themselves were locked down either. This type of oversight is common, much like exposures through public git repos. Presumably the buckets were not restricted, since Zack Whittaker from TechCrunch was able to see that the bucket used by the app contained 130,000 recordings and more than 300 GB of data.
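As a sketch of the missing hardening, S3 Block Public Access can be enabled on the bucket and recordings served through short-lived pre-signed URLs rather than publicly readable objects. The bucket and key names are illustrative:

```python
import boto3

# Bucket and object key names are illustrative.
s3 = boto3.client("s3")

# Block all forms of public access on the recordings bucket.
s3.put_public_access_block(
    Bucket="example-call-recordings",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Serve individual recordings through short-lived pre-signed URLs instead of public objects.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-call-recordings", "Key": "recordings/rec_123.mp3"},
    ExpiresIn=300,   # link expires after five minutes
)
print(url)
```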
Top recommendations to avoid iPhone Call Recorder mistakes
The recommendations for addressing these security issues overlap significantly with those from a prior blog on a separate security incident, the Parler data breach. The Salt Security API Protection Platform is built to detect and block attackers who aim to abuse these types of application vulnerabilities and weaknesses.
Specific to this event with the iPhone Call Recorder app, recommendations include:
- Secure your mobile application code: Always assume that your mobile code will be reverse engineered. Never store intellectual property or authentication material there. Attackers will often start their API reconnaissance and attacks by reverse engineering client-side code, whether it is web browser JavaScript or native mobile code. We see that the security researcher started his work here to understand the application logic and how it made use of AWS S3 for storing recordings.
- Implement both authentication and authorization: It may seem simple, and I may even sound like a broken record repeating it, but these two fundamentals must be accounted for. You must also account for all “links in the chain” and consider the system end to end. We saw here that authentication was not present in the API calls made by the mobile app, and access to the AWS S3 buckets where call recordings were kept was also not controlled. A minimal sketch of both checks appears after this list.
- Avoid predictable identifiers and sequential identifiers: Always use unique, non-predictable identifiers. This tactic helps reduce the risk of enumeration attacks. In this case, the identifiers used directly impact privacy since a phone number is a form of PII. New identifiers should’ve been created for users instead of using a phone number as the unique identifier.
- Protect data served by APIs, and protect APIs that serve user data: Always use encrypted transports for application traffic that exchange sensitive data, like phone numbers or other regulated data types. Return only the data necessary for the app to function, and always consider the sensitivity of the data you are transmitting. Assume that someone can intercept API traffic to “see” sensitive data you are transmitting, and don’t presume that obscuring or masking such data in the UI is sufficient. Network positioning of where that traffic intercept or “man-in-the-middle” occurs is largely irrelevant, since attackers can intercept the traffic from their own devices that they control. In this case, the API traffic exposed PII (originating and recipient phone numbers) as well as S3 locations where full recordings were stored.
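As referenced in the second recommendation above, here is a minimal sketch (again using Flask for illustration) of an endpoint that enforces both checks: authentication to establish who is calling, and authorization to confirm the caller may access the requested user's objects. The route, data store, and token lookup are illustrative stubs:

```python
from typing import Optional

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Illustrative data store: recordings keyed by an opaque user ID, not a phone number.
RECORDINGS = {"user-a1b2": [{"id": "rec_123", "duration_seconds": 182}]}

def authenticate(req) -> Optional[str]:
    """Hypothetical helper: validate the bearer token and return the caller's user ID."""
    token = req.headers.get("Authorization", "")
    if token.startswith("Bearer "):
        token = token[len("Bearer "):]
    return {"valid-token-for-user-a1b2": "user-a1b2"}.get(token)   # stub lookup

@app.route("/users/<user_id>/recordings", methods=["GET"])
def list_recordings(user_id: str):
    caller = authenticate(request)
    if caller is None:
        abort(401)        # authentication: who is making this request?
    if caller != user_id:
        abort(403)        # authorization: may the caller see this user's objects?
    return jsonify(RECORDINGS.get(user_id, []))
```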
Organizational implications and conclusion
The privacy impacts of such an app should be immediately evident: if you are recording conversations, someone’s privacy will always be eroded. In this case, however, the app was found to be doing several things improperly, which led to exposure of call recordings and user data. This example also carries clear regulatory impacts, with the exposure of PII in the metadata transmitted in API traffic.
I would implore all readers to think twice before downloading and using any app from the public app stores of Apple and Google. All bets are off if you choose to sideload mobile apps, since the integrity and authenticity of the mobile app binary are no longer guaranteed. And certainly question the prompts from your mobile device when an application requests permissions on-device. This event reminded me of the flashlight apps that used to be all the rage before Apple and Google built such functionality directly into their respective mobile operating systems. Those apps were notorious for being channels for numerous types of malware and attackers targeting mobile devices. Public mobile app stores include a number of these call recording apps, and it’s a safe bet that at least a few of them aren’t coded or architected with security and privacy in mind.
For established software vendors or organizations building applications, documented security processes and responsible disclosure policies are much more common. Unfortunately, in the case of the public mobile app stores and individual developers, such security processes and disclosure policies are rare. Apple and Google may be able to assist in some cases, but you may ultimately be dealing with one individual wearing all hats. The barrier to entry for building and publishing mobile applications is low because mobile OS vendors want to grow their respective mobile app ecosystems and increase mobile device adoption.
Acknowledging this lack of oversight is not a knock against Apple or Google and their respective app stores. The vendors do what they can to scan apps being submitted for publication. It is important to keep in mind the scale of the problem that they face. The mobile OS vendors are dealing with thousands of app submissions daily, and reviewing all code thoroughly can be a losing proposition. Just like many organizations, they employ a mix of automated scanning with static and dynamic analysis tools and augment with manual review by in-house subject matter experts as appropriate. Google also expanded on this review process in late 2019 with the formation of the App Defense Alliance. Essentially, Google hedges its bets by also submitting apps to mobile security vendors for additional analysis and scanning prior to publishing. The mobile security vendors include ESET, Lookout, and Zimperium.
This case includes an important lesson about pre-deployment secure design reviews and security testing, or the “shift-left” focus, as it’s often referred to these days in DevSecOps strategies. We know these approaches and test tools can help knock out common bugs and vulnerabilities. Unfortunately, many scanners are limited to finding exploitable conditions rather than the wider spectrum of application problems, including authorization flaws, privacy-impacting issues, and business logic flaws. Efficacy is further reduced when scanning is automated within build pipelines, since scans must often be streamlined to complete in a timely manner. Apple and Google face a balancing act: providing “good enough” security scanning so that users of their mobile operating systems feel safe, while also satisfying developers who want to publish their apps to the public app stores quickly.
Mature organizations acknowledge that automatically scanning all code pre-deployment with full test coverage and expecting to catch all types of issues is unrealistic. These organizations regularly augment with internal manual review, externally sourced application assessments, bug bounty programs, and, more importantly, behavioral analysis at runtime. It’s a similar ongoing battle for Apple and Google. We at Salt Security stress exactly this point for practitioners and customers. Yes, shift some of your security focus earlier into design and development phases, but don’t abandon runtime security. Behavioral analysis and runtime protections are likely your first and last line of defense against the wide range of attacks and privacy impacts your organization is facing.
To learn more about how Salt can help defend your organization from API risks, you can connect with a rep or schedule a personalized demo.
Sources:
https://www.pingsafe.ai/blog/how-we-could-have-listened-to-anyones-call-recordings
https://techcrunch.com/2021/03/09/iphone-thousands-calls-exposed/
https://gizmodo.com/security-flaw-in-iphone-app-could-have-let-anyone-liste-1846439834
https://www.cnbc.com/2019/06/21/how-apples-app-review-process-for-the-app-store-works.html
https://source.android.com/security/overview/reports
https://developer.apple.com/app-store/review/