Do I really need these security updates?

When you rely on digital services to run your business efficiently, it is very easy to have a tool built which fulfils a need but is then never updated, the general opinion being “if it isn’t broken, don’t fix it”. Whilst that mindset is largely sound, there is one gaping hole which makes the approach fall apart completely, and that is security.

You may not need any new functionality, or any changes to the existing features, but you will always need security updates. The creators of exploits who are trying to hack into sites and codebases never stand still; they are always innovating and finding new approaches. As a result, security updates should never be optional, and if you think security isn’t that important, we will cover below the reasons why we believe you are wrong.

Layers of Security

Before we delve into the ramifications of an insecure system and the steps that can be taken to avoid these vulnerabilities, it is important to mention that there are a number of different layers of security which are needed. This isn’t a case of one size fits all, where you flick a switch and, as if by magic, you are now secure. Instead, there are multiple levels where security must be maintained, and each has its own variables and vulnerabilities which must be considered on a case-by-case basis.

The main layers we are going to look at are as follows: server, runtime, package, application, codebase, and user. We will explore each of these below, outlining what happens when each layer is compromised, and giving examples of how we keep these elements secure.

If you can’t picture how these fit together, a good way to explain it is to think of a house. Working our way up from the bottom: the server is the plot of ground beneath the house, the runtime layer is the foundation which everything is built upon, the package is the floor itself, the application is the walls and ceilings, the codebase is the contents of the house, and the user is you.

Working our way back down through that example: users can come and go, and you can have an empty house or an overfilled one, which should make sense when you think about visits to your website, known as traffic. Content is populated by the user and arranged based on their needs; this directly shapes the codebase, which must match the content structures provided. It needs to fit into the room, essentially, which is why the codebase can’t break out of the application’s walls, and must also stay within what the package can support. You wouldn’t try to place 4 sofas in a room when you only have the floorspace to accommodate 2. Finally, the package (floor) can’t exist without the runtime layer (foundations) and a suitable server (ground / plot size).

Security isn't a single switch; it's six interconnected layers (server, runtime, package, application, codebase, user) that all need maintenance. Think of it like a house: each layer builds on the one below, and a weakness anywhere compromises everything above it.

Server

When we talk about the server layer, we are talking about the physical infrastructure behind the hosting environment. This could be either a dedicated server in a physical location or a cloud-based server package. Regardless of location, the server will always have an operating system installed, and possibly a hosting platform such as Plesk or cPanel. This software is independent of anything we host on that server, and security updates are periodically released for it which we can apply. If you have a brand new site but you are hosting it on a really old server which hasn’t had any updates applied to its core operating system or hosting software, then the server itself is at risk.

If a bad actor targeted your server, they would work through a list of tested methods which others have used historically. Security updates exist to fix these vulnerabilities; if they have not been applied, then a number of these methods, which are shared online, may work on your server. This could enable the hacker to crash your server completely, install their own software, download data, or even gain command line access so they can take full ownership of that server and lock everybody else out.

The setup we use here at A Digital is a cloud-based DigitalOcean server running Ubuntu Linux. We don’t use a hosting software suite such as cPanel or Plesk because we have opted for a service called ServerPilot instead. This ensures that all major security updates are automatically applied to our server setup, while new Ubuntu releases are applied manually by our team. Old servers are also rotated out periodically and replaced by fresh new servers. This ensures that discovered vulnerabilities are fixed as quickly as possible, at very regular intervals.
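
If you manage your own Ubuntu server rather than using a managed service like ServerPilot, a rough equivalent can be achieved with Ubuntu’s standard unattended-upgrades package; a minimal sketch of the configuration it reads (the package is installed with sudo apt install unattended-upgrades):

    # /etc/apt/apt.conf.d/20auto-upgrades
    # Refresh the package lists and apply security updates
    # automatically, once per day, without manual intervention
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";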

Your server is the foundation: if its operating system and hosting software aren't updated, hackers can use known exploits to crash it, steal data, or lock you out completely. Auto-applying updates and rotating old servers keeps this foundation solid.

Runtime

The runtime layer is what we refer to when we talk about command line operations. These often run silently in the background, and they provide a crucial platform which the server needs in order to run the code on your site. Examples of runtime layer components are PHP, Apache / Nginx, Redis, etc. Commands run on top of these libraries and are often automated by the server to keep everything running smoothly; these are usually referred to as services.

Each of these services can be configured, and some, such as PHP, have different versions available. If a service such as a Redis instance is configured poorly, it can eat up a lot of the server’s resources. For example, the CPU may constantly spike and run higher than it needs to, and memory may be consumed unnecessarily; this leads to a slower loading website and can even result in downtime where the server becomes unresponsive.

Bad actors will sometimes use this to force a site offline in what is known as a DDoS attack, which uses bots to increase the level of traffic to a site until the hardware becomes overloaded. If you have poorly optimised configurations within the runtime layer, these attacks will have a higher chance of succeeding. Older versions of PHP are also vulnerable to a number of attacks which can be used to exploit the system, which is why staying up to date is crucial; it also brings numerous speed benefits alongside the security improvements.

Here at A Digital we set PHP versions within ServerPilot on a per-site basis, and we then handle the configuration for that version as a global setting across the entire server. We also configure a number of other runtime components via the command line on our servers, depending on the needs of each site / client and the services they connect up with. New versions are automatically installed onto our tech stack and then implemented manually by us, checking everything works on a dev environment first before applying the changes to live. This allows us to stay up to date in a controlled manner where we can always roll back if needed. We constantly monitor our server resource usage and keep everything highly optimised so that it runs smoothly and avoids as much downtime as possible.
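
As a quick health check on your own stack, you can ask each runtime service for its version and compare it against the currently supported releases; a minimal sketch, assuming an Ubuntu server with services managed by systemd:

    php -v                     # current PHP version on the command line
    nginx -v                   # or: apache2 -v, depending on your web server
    redis-server --version
    systemctl status nginx     # confirm the service is actually running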

Runtime services (PHP, Redis, Apache / Nginx) are the background workers keeping your site alive. Old versions create vulnerabilities and performance issues that make DDoS attacks easier. Regular updates bring speed improvements alongside critical security patches.

Package

The package layer is similar to the runtime layer, but its components are more targeted towards our specific needs. Instead of PHP itself, we are talking about Composer, or specific libraries within the PHP ecosystem. This still sits at a fairly high level as it is always third party software. These packages are widely available, and vulnerabilities can sometimes exist in the packages which we have installed.

If we had an old version of Imagick installed on a server, we could be vulnerable to an attack targeting image transformers. An old version of an MSSQL library could allow a man in the middle to intercept database connections and read the data or insert their own queries.

These packages are often patched very quickly, though, because of how important they are in propping up the rest of the web’s infrastructure. Our configuration means that these patches and updates are applied nightly whenever they are available, which enables us to have complete trust in our systems.
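
A nightly setup like this might look something like the sketch below: a scheduled job that pulls in package updates and then audits the installed versions against the public security advisory databases (the path, user, and schedule are illustrative, and composer audit requires Composer 2.4 or later):

    # /etc/cron.d/nightly-composer (illustrative schedule and paths)
    # Apply available package updates at 2am, then audit the result
    0 2 * * * deploy cd /var/www/example && composer update --no-interaction && composer audit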

Third-party packages are specialised tools your site depends on. Outdated packages (like Imagick or database libraries) create attack vectors for hackers to intercept data or inject malicious queries. Nightly updates plug these holes before they're exploited.

Application

The application layer is anything which we have chosen to install for a specific project. This still sits at a fairly high level as it is usually third party software, such as webpack, or anything contained within the package.json file or composer.json file. This software is widely available, and vulnerabilities can sometimes exist inside the items we have installed.

Not all of these have been selected by us directly, because when you pull in a piece of software through Composer or npm (the node package manager), it will often reference additional software libraries which it requires to run, and these can in turn have their own references. Once you’ve gone a few levels deep, you can quickly see how software can be pulled in which wasn’t part of the initial requirement.

Craft, for example, is included in our composer.json file; Craft then pulls in a number of libraries it requires, such as Symfony, Twig, Yii, etc. This is just the nature of things, and you become reliant on the creators of your chosen software maintaining it and updating the internal references it requires, closing any potential security holes.
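
You can see this for yourself: a minimal composer.json for a Craft project looks something like the sketch below (the version constraints here are illustrative), and running composer show --tree afterwards will print every transitive library it drags in:

    {
        "require": {
            "craftcms/cms": "^4.5",
            "php": ">=8.0.2"
        }
    }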

If these updates aren’t run, you could have an insecure site where form submissions, image transforms, and various URL endpoints allow bad actors to access the system. An RCE, or remote code execution, is the most serious class of vulnerability, as it allows hackers to put their own code in place on your server; this can result in hundreds of rogue URLs being picked up by Google, and customer data could be at risk. It is crucial that updates are run regularly on your site to prevent this from happening. Updates are released before the vulnerability is disclosed, which gives you time to get a fix in place before the security flaw becomes common knowledge. Once details are released, opportunists try their luck with lazy attacks, simply hunting for sites which aren’t yet updated.

When new versions are released, our code editing software picks this up and tells us, with additional highlighting if it is a security update. The Craft CMS platform also highlights any available updates and tells us when they are needed. We run these updates manually and test them before rolling them out to a live environment, and this is what we do as part of our maintenance and support plans.
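
In practice, checking for these updates boils down to a couple of commands run against a development copy of the site first; a minimal sketch using Craft's own CLI and Composer:

    # List available updates for Craft and its plugins
    php craft update

    # Or check the wider Composer dependency tree for newer releases
    composer outdated --direct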

When you install Craft CMS or npm packages, you're also pulling in dozens of dependencies they rely on. Security updates are released before vulnerabilities become public knowledge, so updating quickly means staying ahead of hackers who scan for unpatched sites.

Codebase

This layer is the part which we are most responsible for as a web agency here at A Digital. When we build your website, this is the part which is customised purely for you, built on top of the selected runtime, packages, and application.

To give an example of this, think of a contact form on your website. We may use a plugin from the application layer, such as Formie, to help us facilitate this, but the frontend code which implements it is all written by us: the template tags, the HTML, the JavaScript, and the CSS. And if a plugin doesn’t give us what we need, or none is available for the feature we want, then we will write our own.

If our JavaScript validation rules aren’t in place correctly, users can submit strange values within forms, and attackers will use these inputs to attempt to run commands against the database, an attack known as SQL injection. There are a number of safeguards in place within the endpoints to prevent this, but we can’t always rely on them alone, so we need to ensure our frontend code is also robust and blocks these attempts.
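
On the server side, the key safeguard is to never concatenate user input directly into a query; a minimal sketch using PHP's PDO extension (the connection details, table, and column names are all placeholders):

    <?php
    // Connect via PDO (credentials are placeholders)
    $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

    // Validate the shape of the input first...
    $email = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);
    if ($email === false) {
        http_response_code(422);
        exit('Invalid email address');
    }

    // ...then bind it as a parameter. Prepared statements send the value
    // separately from the SQL, so it can never run as part of the query.
    $stmt = $pdo->prepare('SELECT id, name FROM customers WHERE email = :email');
    $stmt->execute(['email' => $email]);
    $customer = $stmt->fetch(PDO::FETCH_ASSOC);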

We also have a responsibility to ensure our codebase is optimised to deal with high traffic. If a page loads slowly and is accessed many times at once, it places additional strain on the server, whereas a fully optimised page reduces the load required.

Sometimes updating the application layer requires us to make codebase layer changes to ensure everything is still correctly connected. Testing after updating is important to catch these bugs before one of your site visitors is affected by them. We always believe in prevention over cure in these scenarios; it is impossible to catch every bug before it is encountered, but we resolve as many as possible through our own tests before we roll out updates to live sites.

Your custom code is uniquely yours, and uniquely your responsibility. Poor validation opens doors to SQL injection, while unoptimised code creates server strain under traffic. Test thoroughly after updates; prevention beats fixing problems after customers find them.

User

The user layer is the final layer of security we need to consider. This one usually sits squarely with the end user, the visitor to your website. The main component of user security comes down to the browser they are using to access your website.

You can probably appreciate that the latest version of a browser is going to be much more secure for a user than a version released 4 years ago, or a browser which hasn’t had any updates published in a long time.

This used to be quite a prominent issue in the past, when users were still on very old versions of Internet Explorer and had not upgraded to Edge or moved over to a competitor browser. It has been much less of an issue in recent years, but with the advent of AI-enabled browsers it is still very important to be aware of browser security.

On top of the browsers themselves, users often have their own collection of browser extensions installed. These can range from ad blockers, to discount finders, to AI companions. Some of these can have adverse effects on your analytics data; others can slow the site down for the user or change its layout. These issues will only affect that individual visitor, though, and can be very difficult to replicate.

When a user’s security is compromised, the browser or an extension may be recording their inputs and sending them through to a bad actor. The most we can do is ensure SSL certificates are always installed and kept up to date with the latest encryption techniques; the rest is down to the user. It is worth explaining this layer of security, though, because it shows that the site is not always at fault when security is compromised for a visitor.
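
If you want to confirm that your own certificate is current, you can inspect its validity window from the command line; a quick sketch using OpenSSL (swap in your own domain):

    # Print the start and expiry dates of the live certificate
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -dates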

Users' outdated browsers and sketchy extensions can compromise their own security, recording inputs and sending data to bad actors. Your job is keeping SSL certificates current with the latest encryption; the rest is on them.

Backups

When talking about anything security related, backups are crucial! It is important to have a robust backup framework in place for any disaster recovery scenario. Imagine a situation where the site has gone down, the database is compromised, tables have been removed, fake data has been bulk added, and then the server has gone down and we can’t bring it back up.

The above scenario would require a backup to get the site up and running again, and crucially, a backup taken from before the data changes took effect. Here at A Digital we run a daily backup overnight for both the files and the databases; on eCommerce sites this runs hourly instead. Our backups are all held within a private S3 bucket which we can access at any time, from any location. There are also rotation rules in place so that we only keep a maximum number of backups for each server.
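
A simplified version of this kind of job might look like the sketch below, assuming mysqldump and the AWS CLI are installed, database credentials live in ~/.my.cnf, and the paths and bucket name are placeholders:

    #!/bin/sh
    # Nightly backup sketch: dump the database, archive the files,
    # and copy both into a private S3 bucket
    STAMP=$(date +%Y-%m-%d)
    mysqldump --single-transaction example_db | gzip > "/backups/db-$STAMP.sql.gz"
    tar -czf "/backups/files-$STAMP.tar.gz" /var/www/example
    aws s3 cp "/backups/db-$STAMP.sql.gz" "s3://example-backups/$STAMP/"
    aws s3 cp "/backups/files-$STAMP.tar.gz" "s3://example-backups/$STAMP/"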

This configuration allows us to react to situations quickly and with confidence. Our first objective is to fix the issue on the site; if we determine that a restore point is needed, though, then we will access our backups. Having this fallback option ensures that we are never in a position where everything is lost. Thankfully it is extremely rare for us to need these, but it is essential that we have this system in place. It is much better to have it and not need it than to be caught out on the day when it eventually does happen.

If you host your site with us then we guarantee that your data is safe. If you host elsewhere and aren’t sure of the backup policy in place, please ask your hosting provider and make sure the necessary steps have been followed.

Backups are your disaster recovery insurance. Daily backups (hourly for eCommerce) stored securely off-site mean you can restore from before any breach or corruption happened. It's better to have it and never need it than face catastrophic data loss.

Archiving / Removing Old Data

When your website accepts orders, user account signups, or even contact form submissions, it is also gathering data which is saved from each event into the database. This data is held forever unless you have an archiving / removal policy in place.

If you’re unsure about the usefulness of having one of these policies in place, the question I would ask you is this: do you really need to be keeping records from a form submission placed back in 2014? If the answer is no, which it should be, then the next step is just deciding on the timeframe your archiving and removal policies should cover.

For the timeframe, you don’t want to be so strict that you risk removing relevant data before its usefulness has ended. A good starting point for a strategy would be to remove anything older than 2 years and to obfuscate anything older than 1 year; this can then be tweaked around specific content areas based on your needs.
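
The removal half of that starting policy could be expressed as a scheduled job along these lines; a minimal sketch in PHP, where the connection details, table, and column names are placeholders for whatever your site actually stores:

    <?php
    // Illustrative nightly retention job (credentials are placeholders)
    $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

    // Remove form submissions older than 2 years
    $pdo->exec(
        "DELETE FROM form_submissions
         WHERE created_at < DATE_SUB(NOW(), INTERVAL 2 YEAR)"
    );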

You may wish to maintain all address data for active user accounts without any removal; this is a fair approach, as it means that returning customers who are placing orders won’t have to re-enter their details.

There is always a balance between usability, convenience, and security. The most secure system imaginable would remove all data as soon as it has finished processing an order, but this becomes very frustrating for returning customers as they wouldn’t have any order history, saved addresses, or maybe even an account, meaning that they would essentially start from scratch every single time they visited the site. This creates a bad experience and reflects a poor repeat customer journey. There are always trade-offs which must be considered with each approach.

If it causes issues with convenience, why remove data at all? Because we need to ensure that we aren’t keeping old data unnecessarily. If there were ever a breach and someone gained access to the database, we want to have minimised the number of people affected, reducing the impact of such a breach. These breaches are rare, but that shouldn’t stop you from taking precautions.

There is also a benefit to removing old data: your database will perform faster. Think of it like this: imagine you need to find a pen in your desk. You keep all of your pens in a specific drawer, but you’ve never had a clear-out to remove the pens which no longer work, so finding a working pen takes longer than if you had cleared out the drawer and were confident that every pen works. Now apply this scenario to a database: all of the data held in there which is no longer relevant could be slowing down your queries and impacting the performance of your website.

Old data you don't need creates two problems: a bigger impact if breached, and slower database performance. A sensible policy (like removing 2+ year old submissions and obfuscating 1+ year old data) minimises risk while keeping what matters for returning customers.

Obfuscating Data

If you still can’t bring yourself to delete areas of the data because you need it for various reporting tools, that’s ok; there is another option available to us called obfuscation. Obfuscation means that instead of deleting data, we make areas of it impossible to read.

This can turn an email address into something like ma***@***.co*, meaning the email address can no longer be used if the data is accessed in a breach. The number of asterisks is always the same regardless of email length, so the original could have been 30 characters long, but the obfuscated version is always 13 characters, and only the first 2 characters are real.
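
A function implementing this exact masking rule is short; a minimal sketch in PHP, using the fixed 13-character shape from the example above:

    <?php
    /**
     * Obfuscate an email address into a fixed 13-character mask:
     * the first 2 characters are kept, everything else is replaced.
     * e.g. matthew@example.co.uk -> ma***@***.co*
     */
    function obfuscateEmail(string $email): string
    {
        return substr($email, 0, 2) . '***@***.co*';
    }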

The reason we might want to do this is so that reports can still be generated on the quantity of products sold, the number of orders placed in a given timeframe, and so on. We can’t simply remove the order history data, because those reports would then all be empty as a result. Instead, we query the relevant data, and anything which isn’t being reported on and is deemed to be personally identifiable, we obfuscate, to guarantee that it is being kept securely.

By obfuscating data like this, we are essentially deleting it while maintaining the rest of the dataset required for reporting purposes. This makes it non-identifiable and minimises any risks attached if there were a potential breach. We don’t need to see a customer’s name in 5 years’ time; we just need to know whether that product’s popularity increased when a sale was being run.

Can't delete data you need for reports? Obfuscate it instead. Turn matt@example.com into ma***@***.co*; reports still work, but breached data becomes useless to hackers. It's the best of both worlds: security and functionality.

Data Protection Laws

What does the law say about protecting people’s data? You may remember a lot of headlines about something called GDPR in recent years. This is a law enforced across Europe, but it can also impact global companies if their customers are inside Europe.

Under GDPR, a breach is a security incident where personal data is accidentally or unlawfully destroyed, lost, altered, disclosed, or accessed without authorisation, compromising its confidentiality, integrity, or availability. This includes things like emailing sensitive files to the wrong person, losing a device with unencrypted data, or falling victim to a phishing attack that exposes login details. Organisations are required to report breaches to the supervisory authority (such as the ICO in the UK) within 72 hours of detection, and high-risk breaches must also be reported to the affected individuals.

Here is a tool called Enforcement Tracker which allows you to see the GDPR fines which have been issued. The earliest recorded fine is from 2018, and there are 2,976 fines in total at the time of writing in December 2025. Filtering the country column down to just the UK, Enforcement Tracker shows that 2 fines were issued in 2024, increasing to 8 fines issued in 2025. This is not an unenforceable law; fines are being issued.

The fines range in value, and it isn’t just large companies who are being fined; smaller companies are included too, even down to individuals such as police officers in some cases. This means that the cost of inaction around security will eventually catch up with you: not only does it impact customer confidence and brand perception, it also has a monetary cost through lost sales and GDPR fines.

GDPR isn't theoretical: nearly 3,000 fines have been issued since 2018, hitting small businesses and individuals, not just giants. Breaches must be reported within 72 hours. The cost of ignoring security (fines + lost trust + lost sales) always exceeds the cost of maintaining it.

Taking Action

We've explored various security vulnerabilities and their consequences, but you might now be thinking to yourself: what can you do about it? Don't worry, we've got a number of suggestions which should help you.

First of all we would encourage you to carry out a quick security audit. Ask the following questions to gain a better understanding of your current position:

  • Are my backups working?
  • How frequently do our backups run?
  • Where are those backups being held?
  • When was our last server update?
  • When was our last CMS update?
  • How often are these updates being carried out?
  • Do we have a data retention policy?

If you ask all of the above questions (and you may think of some others too), then you should have a pretty good understanding of your current security and any gaps.

Next we would like you to compare your current maintenance plans with what we offer our clients. You can also bring the information from your security audit and schedule a free security review call with us.

Conclusion

From the server foundation through runtime services, packages, applications, and your custom codebase, to the user's browser: each layer requires active maintenance. Neglect any single layer and you've created a vulnerability that bad actors can exploit.

On reflection, consider whether you can really afford not to take security seriously. Investing in security always pays off in the long run. To answer our original question: yes, you really do need those security updates.