Security by obscurity
Security by obscurity is a defensive strategy that relies on keeping a system's inner workings secret rather than on strong cryptography or robust architecture. The idea is that if an attacker doesn't know how the system works, they won't be able to compromise it.
This approach is most often found in backend systems such as APIs, where the logic is largely unknown or undocumented to outsiders.
How it works
Proprietary software often obfuscates code, making reverse-engineering difficult. Companies can also claim to use "proprietary" encryption or hashing functions that are not publicly documented.
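As a minimal sketch of why an undocumented cipher is weak protection, consider this hypothetical "proprietary" scheme: a single-byte XOR whose only defense is that the algorithm is kept secret. Once an attacker recovers the method (for example, by reverse engineering), breaking it is trivial.

```python
# A toy "proprietary" cipher (hypothetical example).
# Its sole protection is the secrecy of the algorithm and this
# hard-coded, undocumented key.
SECRET_KEY = 0x5A

def toy_encrypt(plaintext: bytes) -> bytes:
    # XOR every byte with the same key -- trivially reversible
    return bytes(b ^ SECRET_KEY for b in plaintext)

# Once the algorithm leaks, decryption is the very same operation:
ciphertext = toy_encrypt(b"top secret")
recovered = toy_encrypt(ciphertext)  # XOR twice restores the plaintext
```

The moment the code is disassembled or leaked, the "encryption" offers no security at all, which is exactly the failure mode the rest of this section describes.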
The problem in software
Without external scrutiny, bugs or weaknesses go unnoticed until an attacker finds them. Relying on secrecy can also lead developers to skip essential security best practices (e.g., proper input validation, secure coding guidelines).
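One best practice that secrecy tempts developers to skip is input validation. A common illustration (a hypothetical user lookup, sketched here with Python's standard sqlite3 module) is using parameterized queries instead of assuming attackers "won't know" the query structure:

```python
import sqlite3

# Hypothetical lookup: a parameterized query handles untrusted input
# at the driver level, rather than relying on the query structure
# staying secret.
def get_user(conn: sqlite3.Connection, username: str):
    # NEVER build the query by string concatenation, e.g.:
    #   f"SELECT id, name FROM users WHERE name = '{username}'"
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()
```

The parameterized version stays safe even when the attacker knows exactly how the query is built; the concatenated version is only safe for as long as nobody guesses it.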
A system should always be secure by design instead of relying on security by obscurity. That means building strong cryptographic primitives, performing threat modeling early, and keeping the code transparent.
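In practice, secure by design means following Kerckhoffs's principle: the algorithm is public and audited, and only the key is secret. A minimal sketch using Python's standard hmac and hashlib modules:

```python
import hashlib
import hmac
import secrets

# HMAC-SHA-256 is a public, widely audited construction;
# only the key is secret, per Kerckhoffs's principle.
key = secrets.token_bytes(32)

def sign(message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"hello")
assert verify(b"hello", tag)
assert not verify(b"tampered", tag)
```

An attacker can read every line of this code and still cannot forge a tag without the key, which is the opposite of the toy-cipher situation above.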
Examples
Apple
Apple devices (iOS and macOS) are a prime example of systems that rely heavily on security by obscurity. Almost everything they make is proprietary, and Apple treats source code as a secret.
While Apple claims to use strong cryptography and asserts that system images are audited by third parties, these claims aren't, and can't be, independently verified. Why should we blindly trust such assurances?
This "black box" approach has several consequences. iOS is tightly controlled, which limits users' ability to inspect the code or modify the system. Essentially, you're trusting Apple to have implemented everything correctly.
Apple also actively makes it difficult to reverse engineer its software. This may deter some attackers, but it also hinders every independent security researcher who might want to find and report vulnerabilities.
Your security depends on Apple's competence and willingness to address security issues. If there's a flaw in their proprietary code, it's up to them to fix it, and you're reliant on them doing so promptly.
This approach also made it possible for the US government to access user data through programs like PRISM, as revealed by Edward Snowden. Because Apple's operating systems were (and still are) completely locked down, there was no way for users to know whether backdoors were in place. The very secrecy that Apple touted as a security feature was, in fact, what enabled this clandestine data collection. No independent researcher could have discovered this access because the code was locked away, reinforcing the inherent risk of trusting a company whose interests aren't your own.
But Apple's primary goals aren't security or privacy; they're selling devices and maintaining a positive reputation. While Apple markets security and privacy as part of every product it makes, those claims are largely unverifiable and likely rank lower on its list of priorities than profit and brand image.