Software is not like wine and cheese: it does not get better with age. On the contrary, security strength decreases over time as software becomes obsolete. Data security has always been important, but with more people working remotely as a result of the current health crisis, there are more opportunities for unauthorized access to your data than ever before.
Security is a group effort, since the weakest link is the point of entry. According to a study conducted by IBM and the Ponemon Institute, the two root causes of data breaches in 2020 were compromised credentials (most often due to weak passwords) and cloud misconfigurations (leaving sensitive data accessible). According to Gartner, in 2021 exposed APIs will pose a larger threat than the UI in 90% of web-enabled applications. Organizations spend time and effort securing information on the front end, but attackers claw their way into the system anyway. Businesses therefore need a second check on the way out of the network. In other words, if you miss a thief on the way in, you can still catch him on the way out: confidential information has value to an attacker only if they can transfer it to their own systems.
Database security is a complex process that involves all aspects of information security technologies and practices. It is also usually at odds with database usability: the more accessible and usable a database is, the more vulnerable it is to threats; the more invulnerable it is to threats, the more difficult it is to access and use. This paradox is called Anderson's Rule.
Let us take a look at how data security evolved over the decades. There are a few good stories in there you will enjoy reading.
Access to the giant electronic machines of the era was limited to a small group of people, and the machines weren't networked. Only a few people knew how to operate them, so there was no imminent threat. The theory of computer viruses dates back to 1949, when computer pioneer John von Neumann proposed that computer programs could reproduce themselves.
The roots of hacking are as much related to telephones as they are to computers. In the late 1950s, 'phone phreaking' emerged. The term covers the methods that 'phreaks' (people with an interest in the workings of telephones) used to override the protocols that allowed telecom engineers to work on the network remotely, letting the phreaks make free calls.
Most computers in the early 1960s were still huge mainframes, locked away in secure, temperature-controlled rooms. These machines were very costly, so access, even for admins, was limited. Back then, attacks had no commercial or geopolitical purpose; most hackers were curious people or tinkerers who wanted to improve existing systems.
Cybersecurity actually began in 1972 with a project on ARPANET (the Advanced Research Projects Agency Network), a precursor to the internet. Researcher Bob Thomas came up with a computer program called Creeper that could travel across ARPANET's network, leaving a breadcrumb wherever it went: the message 'I'm the creeper, catch me if you can'. Ray Tomlinson (the inventor of email) wrote another program called Reaper, which chased and deleted Creeper. Reaper was the first antivirus software, and because it was also a self-duplicating program, it is arguably the first-ever computer worm.
The 1980s saw an increase in high-profile attacks, like those at National CSS, AT&T, and Los Alamos National Laboratory. The terms 'Trojan horse' and 'computer virus' were first used in the 1980s as well, and cybersecurity started to be taken more seriously. Tech users quickly learned to monitor file sizes, having learned that an unexplained increase in a file's size was the first sign of potential virus infection. Cybersecurity policies incorporated such checks, and a reduction in free operating memory remains a sign of attack to this day. Early antivirus software incorporated simple scanners that performed context searches to detect virus code sequences. Most scanners also included 'immunizers' that made viruses think the computer was already infected so they would not attack it (much like our vaccines).
New viruses and malware multiplied through the 1990s, growing from tens of thousands of samples to around 5 million appearing every year by 2007. By the mid-'90s, it was clear that cybersecurity had to be deployed at scale to protect the public. One NASA researcher developed the first firewall program, basing it on the structures that prevent the spread of actual fires in buildings. By the end of the 1990s, email was booming, and while it promised to revolutionize communication, it also opened up a new entry point for viruses.
With the internet becoming a household commodity in the early 2000s, cyber-criminals had more vulnerabilities to exploit than ever before, and as more and more data was stored digitally, there was more to hack. In 2001, a new infection technique surfaced: people no longer needed to download a file, as visiting an infected website was enough. Viruses infected clean pages or 'hid' malware on legitimate web pages. Messaging services were also targeted, with worms designed to propagate over IRC (Internet Relay Chat) channels. The development of zero-day attacks, which exploit gaps in security software and applications before they are patched, made antivirus less effective.
Cybersecurity tailored specifically to the needs of businesses became more prevalent in 2011. As cybersecurity developed to handle a wide range of attack types, attackers responded with innovations of their own: multi-vector attacks and social engineering. Attackers grew smarter, and antivirus was forced to move from signature-based methods of detection to next-generation techniques.
Security is something that should be built into every stage of software engineering, including architecture. Let us first understand how the back end functions. Applications and front ends should never have direct access to the database. Instead, there is usually a tiered architecture with an application server in between, where the data is scrubbed (to protect personal data, or PII) before being sent to the front end.
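As a minimal sketch of that middle tier, the snippet below shows an application-server function that strips PII fields before a record ever reaches the front end. The record shape and field names are hypothetical, and a real system would query an actual database rather than return a hard-coded dict.

```python
# Hypothetical app-server layer: the front end calls this code,
# never the database itself, and PII is scrubbed on the way out.

PII_FIELDS = {"ssn", "date_of_birth", "home_address"}

def fetch_customer_record(customer_id):
    # Stand-in for a real database query (illustrative data only).
    return {
        "id": customer_id,
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "date_of_birth": "1990-01-01",
        "loyalty_tier": "gold",
    }

def scrub(record):
    # Drop (or mask) PII fields before the record leaves the app server.
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

if __name__ == "__main__":
    print(scrub(fetch_customer_record(42)))
    # {'id': 42, 'name': 'Jane Doe', 'loyalty_tier': 'gold'}
```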
Since there is no single solution, it is best to distribute security handling. Most applications are set up so that the people responsible for data management (application admins) are not given access to the underlying database, while the people who do have direct data access (data scientists, info-sec personnel, etc.) are kept out of the business end of the operations. The primary reason for this is auditing: people who change data can do so only through the front end, and the front end leaves an audit trail of the actions taken. The audit trail keeps application admins accountable, and it also prevents them from looking at things they shouldn't be looking at. Companies also prefer to keep their architecture secret, since one of the ways to discover a vulnerability in a system is to understand its underlying architecture.
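To make the audit-trail idea concrete, here is a small sketch of a front-end action handler that records every data change alongside the change itself. The storage (a local file) and the entry fields are illustrative assumptions; production systems would typically write to an append-only store or a dedicated audit service.

```python
# Illustrative audit trail: every change made through the front end
# also appends a who/what/when record.
import json
import time

AUDIT_LOG = "audit.log"  # hypothetical destination; real systems
                         # would use an append-only audit store

def update_record(user, record_id, field, new_value, apply_change):
    apply_change(record_id, field, new_value)  # perform the actual change
    entry = {
        "ts": time.time(),      # when the change happened
        "user": user,           # which admin acted
        "record": record_id,    # what they touched
        "field": field,
        "new_value": new_value,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
```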
We will now go through some common threats to data security in current times and how you can mitigate them.
Even a small error can allow attackers to hijack database systems, which can cost millions. To prevent such consequences, organizations should adopt an "everything will be broken" threat model to secure their databases and keep valuable information from being compromised. Below are a few of the basic security measures you can take to keep your organization's database safe.
Keep the application and database servers on separate machines. A hosting server can be used for the application, but customers' valuable data should live on a separate database server with security features like multifactor authentication and proper access permissions. Hosting the application and the database on the same machine makes it easier for attackers to break into the system and hack the administrator account.
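As a sketch of what this separation looks like from the application's side, the snippet below connects to a database on a dedicated host, with the hostname and credentials drawn from the environment rather than hard-coded. It assumes PostgreSQL and the psycopg2 driver purely for illustration; any client/server database works the same way.

```python
# Sketch: the app server reaches the database over the network on a
# *separate* machine, identified by environment configuration.
import os
import psycopg2  # assumed driver; pip install psycopg2-binary

conn = psycopg2.connect(
    host=os.environ["DB_HOST"],            # dedicated database machine,
    port=os.environ.get("DB_PORT", 5432),  # not the app/web host
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    sslmode="require",                     # encrypt data in transit
)
```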
Once the database is set up, it is important to ensure that it is fully protected by a firewall capable of filtering outbound connections as well as any inbound requests meant to access information. The database server should also be protected from malicious files by installing anti-malware and anti-ransomware software.
Encryption at rest protects the data with a private key held on the application server or the database server, so even if attackers gain access to the database, they cannot easily decrypt the data. Encryption in transit should also be implemented, so that data is encrypted before it is transferred over the network from the application server to the database server and vice versa.
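Here is a minimal sketch of encrypting a value before it is stored, using symmetric (Fernet) encryption from the third-party `cryptography` package. The key handling is deliberately simplified: in production the key would live in a key-management service or hardware module, never alongside the data or in source code.

```python
# Minimal at-rest encryption sketch using the `cryptography` package
# (pip install cryptography). Key handling is simplified for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret, away from the database
f = Fernet(key)

token = f.encrypt(b"card_number=4111111111111111")
print(token)                  # ciphertext is what gets stored
print(f.decrypt(token))       # only holders of the key can read it back
```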
Organizations should keep the number of users who can access the database to a minimum (usually data scientists or infosec personnel), and a proper authentication process (2FA, MFA, etc.) should be implemented for those users. Database credentials should be stored in a hashed format so they are unreadable, and activity logs should be updated regularly to monitor all queries and requests.
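The snippet below sketches one way to store a credential as a salted hash rather than plaintext, using only the Python standard library (`hashlib.scrypt`). The parameters shown are common defaults, not a recommendation tuned to any particular system.

```python
# Salted, slow hashing of a credential with the standard library.
import hashlib
import hmac
import secrets

def hash_password(password: str):
    salt = secrets.token_bytes(16)  # unique random salt per credential
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest  # store both; neither reveals the password

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```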
All third-party software, APIs, and plugins must be kept at their latest versions. These systems should be updated regularly, or whenever new patches are released, so that they are immunized against newly discovered cyber threats.
Back-end data protection is critical for your sensitive data, especially with new data protection policies coming into force all over the world. By following these best practices, you can stop the most anticipated risks and lay the foundation for genuinely solid security for your product.