Passwords - Evolution of authentication
The history of authentication begins with passwords in the 1960s, when the first computers were made available to the general public.
Back then, computers were huge, ridiculously expensive, and slow - by today's standards. The fact that only a handful of universities and large companies had a real computer made demand exceptionally high. To meet this demand, universities such as MIT introduced time-sharing operating systems such as the Compatible Time-Sharing System (CTSS), which allowed multiple users to share resources provided by a single computer.
With the demand problem solved, file privacy within the shared file system became the next challenge. It is worth noting that the users were researchers and students who used the computer to run calculations and store their research and discoveries; at first, no one worried much about privacy.
In fact, everyone had access to everything. The solution appeared in 1961 thanks to Fernando Corbató, a researcher and later professor at MIT.
Corbató implemented a simple, admittedly very rudimentary, password scheme:
Passwords were stored in a plain text file in the file system.
It didn't take long for someone to break the scheme. Allan Scherr, in an attempt to extend his limited four-hour session on the MIT computer, simply found the file's location and printed out the complete list of users' passwords.
The conclusion from the 1960s was clear:
Storing passwords in a plaintext file is a bad idea.
The mistakes of the past pushed the authentication story forward with a new member. Robert Morris borrowed a concept from cryptography: the hash function.
Cryptography:
In general, cryptography is a writing technique in which a message is encrypted using secret codes or encryption keys. Cryptography is mainly used to protect messages considered confidential, and it is applied in a great many fields, such as defense, information technology, and privacy. Many cryptographic algorithms exist to encrypt a message (and let the recipient decrypt it).
Definition: ORACLE
Hash function:
A hash function is a function that, from data provided as input, computes a fingerprint used to quickly (though not uniquely) identify the initial data. Hash functions are used in computing and in cryptography.
Definition: Techno-Science
Using a hash function makes it possible to store only a fingerprint of each password in the database, rather than the password in clear text. It is important to compute these fingerprints with a public algorithm known to be strong; today, MD5 is no longer considered strong.
Definition: CNIL
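The idea above can be sketched in a few lines. This is a minimal illustration, not a production scheme: SHA-256 stands in for "a public algorithm known to be strong" (per the note above, MD5 no longer qualifies), and the password and function names are invented for the example.

```python
import hashlib

# Store only a fingerprint (hash) of the password, never the password itself.
stored_fingerprint = hashlib.sha256(b"correct horse battery staple").hexdigest()

def login(submitted: str) -> bool:
    # Hash the submitted password and compare fingerprints.
    return hashlib.sha256(submitted.encode()).hexdigest() == stored_fingerprint
```

Even if the stored file leaks, an attacker sees only fingerprints; this is exactly what a plaintext file like CTSS's could not offer.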
When the hash function became a standard, hackers found ways around it. Additional protection was introduced in the form of a random "salt", which makes each hash unique and far harder to reverse. Even so, storing passwords as hashes was a real breakthrough at the time.
Salting: Salting is a method for strengthening the security of information that is intended to be hashed (for example, passwords) by adding additional data, so that two identical pieces of information do not lead to the same hash (the result of a hash function). The purpose of salting is to defeat frequency-analysis attacks, rainbow-table attacks, dictionary attacks, and brute-force attacks. For the last two, salting is effective when the salt is not known to the attacker, or when the attack targets a large amount of hashed data that is each salted differently.
Definition: WIKIPEDIA
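A salted scheme can be sketched with the standard library's PBKDF2, which combines the salt and password and iterates the hash. The function names and the iteration count are illustrative choices, not a prescription:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest    # both are stored alongside the user record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```

Because every user gets a different salt, two users with the same password end up with different stored fingerprints, which is precisely what defeats rainbow tables and frequency analysis.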
Besides hashing, other cryptographic techniques have proven beneficial for the development of authentication. Asymmetric cryptography, the foundation of public key infrastructure, is one such technique. Without going into the details of how it works, this technology is based on two keys: a public key that can be shared securely with the rest of the world, and a private key used to create digital signatures that prove one's identity. A digital certificate ties the two together: it binds a public key to an identity and carries a signature so that others can verify that the key really belongs to its claimed owner.
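The two-key principle can be illustrated with a deliberately tiny toy. The RSA parameters below (p = 61, q = 53) are for demonstration only and offer no security at all; real systems use keys of 2048 bits or more, and real signature schemes add padding that this sketch omits.

```python
import hashlib

# Toy RSA key pair: n is public, e is the public exponent,
# d is the PRIVATE exponent known only to the key's owner.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash the message, then apply the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding only the PUBLIC key (n, e) can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h
```

The point of the sketch is the asymmetry: signing needs `d`, verifying needs only `(n, e)`, so identity can be proven without ever revealing the private key.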
However, widespread use of public key infrastructure did not occur until the early 1990s, because the technology was long highly confidential and reserved for government institutions.
As security measures became stronger, attacks kept pace, so the security industry had to constantly up its game and look for more secure authentication solutions. Static passwords were no longer reliable: hackers could steal, intercept, or guess a password and replay it as many times as they wanted. It was in this context that an idea was born: what if a user had a completely different password every time they wanted to access a service? One-time passwords (OTPs) were born.
When developing the concept of one-time password authentication, two main challenges had to be addressed:
How can a new, unpredictable password be generated randomly and still be recognized by the system as correct?
How can these dynamic passwords be delivered to end users?
Once these challenges were solved, the first product was a time-based OTP delivered through dedicated hardware. Over the years, OTP standards evolved from time-based and challenge/response methods to hash-based and event-based ones. Delivery finally shed the dedicated hardware device: one-time passwords now arrive via SMS, email, or specialized mobile applications.
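The hash-based and time-based variants mentioned above are standardized as HOTP (RFC 4226) and TOTP (RFC 6238), and the core of both fits in a few lines. This is a minimal sketch of the standard algorithm, not a complete authenticator:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC the event counter with the shared secret (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    # TOTP is just HOTP where the "event" is the current 30-second step.
    return hotp(secret, int(time.time()) // period, digits)
```

This also answers both challenges at once: the server and the device share a secret, so both can compute the same unpredictable code independently, and no special delivery channel is needed.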
The secrecy that had surrounded public key infrastructure since the 1970s was finally lifted during the 1990s. The driving force behind the need for PKI (Public Key Infrastructure) was the advent of the World Wide Web: with a wealth of confidential data moving online, identifying service users became crucial.
As mentioned earlier, PKI builds on digital certificates and public/private key pairs. Early work in this area included the SP4 protocol, which fed into what eventually became the Transport Layer Security (TLS) protocol. Secure Sockets Layer (SSL), introduced by Netscape in the mid-1990s, brought server authentication and key exchange to the web; TLS later succeeded SSL as the standard.
Over time, the core functions of PKI technology have evolved to include the creation, storage, and distribution of digital certificates. These functions also include:
CA (Certification Authority): responsible for issuing and signing digital certificates
RA (Registration Authority): responsible for verifying the identity of users submitting requests for digital certificates
Central directory: intended for storing keys
CMS (Certificate Management System): managing operational activities (access to stored certificates, etc.)
Certification Policy: a statement describing PKI requirements
Multi-factor authentication (MFA) requires a combination of several authentication factors to verify a person's identity. The OTPs of the 1980s first touched on this idea. The 2000s eliminated the need for dedicated hardware to generate dynamic passwords: emerging technologies and digitalization brought specialized mobile applications for MFA and OTP generation.
The three main authentication elements needed for successful identity verification are:
Knowledge – something the user knows, such as a PIN or password.
Possession – something the user owns; a digital certificate, smart card, mobile device, etc.
Inherence - something that directly verifies that it is the user, i.e. biometric information (e.g. fingerprint, face or voice recognition).
Successful authentication requires combining at least two of these elements. If an attacker obtains a password, they would also need access to the second, or even third, factor used to authenticate.
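A two-factor check can be sketched by requiring both a salted password hash match (knowledge) and a valid OTP from the user's device (possession). The helper and parameter names below are invented for illustration; the OTP part follows the RFC 4226 HOTP construction:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # Standard HOTP truncation (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password, otp_code, salt, stored_hash, otp_secret, counter):
    # Knowledge factor: salted password hash must match.
    knowledge_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    # Possession factor: OTP from the user's device must match.
    possession_ok = hmac.compare_digest(otp_code, hotp(otp_secret, counter))
    return knowledge_ok and possession_ok  # BOTH factors must pass
```

A stolen password alone fails the check, and so does a stolen device without the password, which is exactly the point of combining factors.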
Single sign-on (SSO) represented a significant advancement in authentication. The basic principle of SSO is that users cannot be trusted with passwords. We tend to underestimate the importance of passwords, reusing them repeatedly and sometimes even sharing them with friends, family members or colleagues. It's no wonder we forget our passwords since we need a different one for each service we use. To solve these and other problems, the concept of a trusted third party responsible for identity verification was born.
SSO relies on a trusted third party to authenticate users, eliminating the need to verify credentials on each website or service. When a user logs in to a website, the site checks to see if an SSO provider has already authenticated that user. If the verification is successful, the user accesses the site without needing to log in again; otherwise, the user must log in to the website. Although convenient, SSO also poses some risks. For example, if a user's Gmail account is compromised, any websites or services they used Gmail to sign in to are also at risk.
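The trust relationship behind SSO can be sketched as a token issued by an identity provider (IdP) and accepted by a service that trusts it. Real SSO uses standards such as SAML or OpenID Connect with asymmetric signatures; the shared HMAC key and the function names here are purely illustrative.

```python
import hashlib
import hmac

# Hypothetical key shared between the IdP and the service.
IDP_KEY = b"shared-secret-between-idp-and-service"

def issue_token(user: str) -> str:
    # The IdP authenticates the user once, then vouches for them.
    sig = hmac.new(IDP_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{sig}"

def service_accepts(token: str) -> bool:
    # The service never sees a password; it only checks the IdP's signature.
    user, _, sig = token.partition(".")
    expected = hmac.new(IDP_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The sketch also makes the risk mentioned above concrete: whoever controls the IdP's key (or the user's IdP account) is trusted by every relying service at once.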
Biometric authentication uses physical characteristics to determine the user's identity. This includes fingerprint scanning, facial recognition, voice recognition, iris scanning, etc. Basically, biometrics cover the “something you are” element of authentication in the strong authentication requirement.
To everyone's surprise, the iPhone was not the first to integrate biometrics. The first smartphone to enable fingerprint authentication was the Android-powered Motorola ATRIX. Apple followed in 2013 with the Touch ID fingerprint sensor, then replaced it with Face ID in 2017, in which 30,000 infrared dots scan the user's face to determine their identity.
Today, biometrics are part of our daily routine – unlocking our smartphones, confirming online purchases, etc. They are extremely difficult to forge and are considered one of the most secure authentication methods available. However, debates over privacy concerns continue to raise legal questions.