Policy details
Prepared for: NHS Norfolk and Suffolk ICB
Status: Under review
Version: 1.4
Date: 1/04/2026
Document control details:
- 4/06/2021 – Version 0.1, Initial draft for review
- 30/06/2021 – Version 0.2, Updated diagrams included
- 27/06/2025 – Version 1, Approved
- 25/04/2025 – Version 1.1, Updated to reflect increased use of Azure tools and functionality
- 24/06/2025 – Version 1.2, Approved
- 13/06/2025 – Version 1.3, Additional details added on Incident Management, User Access and Security Reviews and Software
- 13/08/2025 – Version 1.4, Addition of reference to hardware authentication token use
- 1/04/2026 – Current version, Revised to apply to Norfolk and Suffolk ICB, instead of Norfolk and Waveney ICB
Introduction
Purpose
This document sets out the principles, approach and controls for the Norfolk and Suffolk ICB Azure Data Hub environment from a data and infrastructure security perspective, and explains how the environment secures and protects the information of the ICS’ patients and staff from malicious activity throughout its system life cycle.
This document is intended to cover the data flows into the Data Hub and any subsequent processing and storage of that data within it together with any outbound flows of data. Full details of data fields, access to and any use of Data Hub data will be detailed within each use case as part of the governance process.
This document’s intended audience is all organisations within the Norfolk and Suffolk Integrated Care System.
Architecture
Principles
The architecture has been designed to be as flexible as possible whilst still ensuring tight controls over data governance and security. Its design foundation is a recognised NHS Azure blueprint, enhanced to adopt a Zero Trust Architecture (ZTA) approach.
ZTA is a security framework that operates on the principle of “never trust, always verify.” Unlike traditional perimeter-based security models that assume entities inside the network are trustworthy, ZTA treats all users, devices, applications, and network traffic as untrusted by default – regardless of their location.
The Zero Trust model aligns with modern security needs by providing more granular control, improving visibility, and enhancing the ability to detect and respond to threats.
Alongside ZTA, the environment must adopt defence-in-depth principles, including perimeter control, layered NSGs, subnet isolation and private endpoints.
All resources should be configured with private endpoints.
Environments
The Data Hub comprises four separate environments:
Three relate to functional areas that contain and control data:
- Development (DEV)
- User Acceptance Testing (UAT)
- Production (PROD)
No data will flow between these environments in any form.
The fourth contains common functions (such as log storage, monitoring, firewall):
- Hub (HUB)
All Azure resources will be based in one of the following UK Azure datacentres:
- UK South (London)
- UK West (Cardiff)
The development environment will normally house only test data. However, in a continuous integration environment where there is a large element of discovery, it may be necessary to use live data to validate developed processes.
In these circumstances it must be:
- agreed by the Data Hub Governance group in advance.
- solely for the purpose of validating and refining development (and not be used for any initial development).
- segregated from all test data and strictly controlled with appropriate RBAC controls.
- removed as soon as the development has been validated.
Specialist Cloud Support
To support and supplement the in-house team managing the Data Hub, there is a Managed Service Contract for the Data Hub providing several services (see the Management section for details).
Within that provision there is a route to call on specialist cloud advice and services to ensure that any architectural changes or additions follow best practice.
Development
Approach
Development of both the environment and the applications within it will follow industry best practice at all times, including utilising Infrastructure as Code (IaC), defensive development approaches, and code repositories for all code.
Development within this environment will only be carried out by those with the necessary understanding of the above approaches, including defensive code development and ZTA, and thus the potential risks to the system being built.
Infrastructure as Code (IaC)
Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure through machine-readable configuration files rather than manual processes. Utilising infrastructure in code will ensure a high level of consistency, repeatability, and improved governance across all environments.
All IaC artifacts must be treated as sensitive code assets and should follow secure coding practices, undergo peer reviews, and be subject to automated security scanning and compliance validation.
Secure Application Development
Development within this environment should only be carried out by those with the necessary understanding of defensive code development and the risks to the system being built.
Code will be developed in line with good practice so that it can be extended and maintained effectively. It should be clean and well documented, which in turn will make it easier to secure. Any third-party code libraries or other code dependencies should not be treated any differently from the rest of the application/system being developed.
In particular, development should comply with OWASP recommendations.
Exception handling will ensure that error messages returned to internal or external systems or users do not include sensitive information that may be useful to an attacker.
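As a minimal sketch of this principle, the handler below logs full failure detail internally while returning only a generic message and an opaque reference to the caller. All names here (the `datahub` logger, the `process` stub, the payload shape) are illustrative assumptions, not part of the actual Data Hub codebase.

```python
import logging
import uuid

logging.basicConfig(level=logging.CRITICAL)  # keep demo output quiet
logger = logging.getLogger("datahub")        # hypothetical internal logger name

def process(payload: dict) -> int:
    # Stand-in for real processing; raises on bad input.
    return 100 // payload["divisor"]

def handle_request(payload: dict) -> dict:
    try:
        return {"status": "ok", "result": process(payload)}
    except Exception:
        # Full detail (stack trace) goes to the internal log only; the caller
        # receives a generic message plus an opaque reference for support.
        ref = uuid.uuid4().hex[:8]
        logger.exception("request failed (ref=%s)", ref)
        return {"status": "error", "message": f"Request failed. Reference: {ref}"}
```

The reference identifier lets support staff correlate a user report with the detailed internal log entry without exposing stack traces, paths or connection strings to an attacker.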
DevOps Code Repository Security
All code repositories used in the software development lifecycle, including those for application code, infrastructure as code (IaC), and configuration management, must adhere to the following security controls and best practices:
- Access Control and Authentication – repositories will utilise Multi-Factor Authentication (MFA).
- Role-based access control (RBAC) – will be enforced to ensure users only have the minimum level of access required for their responsibilities. Access will be revoked immediately upon employee departure or role change.
- Secure Development Practices – Secrets, credentials, and API keys must never be stored in plaintext within code repositories.
- Peer Reviews – All code will undergo peer review prior to merging into production branches.
- Version Control – All changes will be tracked via version control, with clear commit messages and traceability to related issues or tickets.
- CAB – All production changes will be submitted to the Norfolk and Suffolk Datahub Change Board.
- Third-Party Dependencies – All third-party code and dependencies must be evaluated and approved before use in any environment. Usage of unverified or deprecated packages is prohibited without explicit security review.
Encryption
Minimum Encryption Level
The minimum encryption algorithm key strength will be AES 256bit.
Data in Transit
Data will be encrypted and protected from interception.
As a minimum all data flows to or from this environment will be transmitted either using SFTP or TLS (1.2 min).
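For TLS connections, the 1.2 minimum can be enforced in code rather than relied on implicitly. The sketch below, using Python's standard `ssl` module, shows one way a client context could refuse anything older than TLS 1.2; it is an illustrative example, not the Data Hub's actual implementation.

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Create a TLS client context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()            # certificate + hostname checks on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0 and 1.1
    return ctx
```

`create_default_context()` already enables certificate validation and hostname checking; pinning the minimum version on top closes off downgrade to legacy protocols.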
Data at Rest
Azure data will be stored in an encrypted form whilst at rest either:
- within blob storage using Azure Storage Service Encryption (SSE), which automatically encrypts data before it is stored and automatically decrypts it on retrieval. The process is transparent to users and uses 256-bit Advanced Encryption Standard (AES) encryption with Microsoft key management.
- within Databases:
- Microsoft SQL Server and Azure Synapse use Transparent data encryption (TDE) to encrypt stored data. TDE does real-time I/O encryption and decryption of data and log files using a database encryption key (DEK) and AES encryption algorithms.
- PostgreSQL uses Azure storage encryption with service managed keys. This provides transparent encryption and decryption using 256-bit AES encryption.
Virtual Servers
All Windows and Linux VMs will have encrypted OS and data drives. As improvements are made to the security functionality in Azure, services will be migrated over to ensure the best protection is utilised.
Encryption will either be Azure Disk Encryption (ADE) or Encryption at Host. Following Microsoft recommendations ADE will be phased out as part of a rolling programme of improvements.
ADE:
- Uses the BitLocker feature of Windows to provide volume encryption for the OS and data disks and is integrated with Azure Key Vault to control and manage the disk encryption keys and secrets.
- Linux will be encrypted using the DM-Crypt feature to provide volume encryption for the OS and data disks and is integrated with Azure Key Vault to control and manage the disk encryption keys and secrets.
Encryption at Host:
- Encrypts all data at rest and as it flows to/from the underlying storage service where it is persisted. This method ensures end-to-end data encryption and includes the temporary disk and OS/data disk caches. The same approach can be utilised regardless of operating system. Platform-managed keys will be used in all instances.
Data Management
Process Separation
The architecture for the transfer of inbound data streams has been created so that the accounts used to transfer data ‘in’ are only able to access a specific temporary staging area and have no other access to the Azure environment. Subsequent internally initiated Azure processes (Integration Engine, Azure Pipelines etc.) will be the only services that can retrieve information from these temporary staging areas.
Data Partitioning
In addition to segregation of network and infrastructure access, data will also be separated. Personally Identifiable Information (PII) data will be separated from the clinical data and stored independently.
Unstructured data will be separated and access controlled using RBAC. For organisation data this will be separated at the top level and all hierarchical permissions will be controlled by RBAC groups.
For databases, segregation will be controlled through organisational and use-case schema levels to ensure appropriate segregation controls for the disparate data.
Data Flows
There are a number of routes for data to transfer to or from the Data Hub:
- Direct connections via the Microsoft Self-Hosted Integration Runtime (SSL).
- Batched data over sFTP (SSH).
- Realtime HL7 message feeds over TCP Ports (SSL).
- Batch transfers over HTTPS (SSL).
- Azure Storage to Azure storage transfers (SSL).
These flows utilise either SSL or SSH, and primarily travel over the HSCN network. Whilst the NHS and the Government’s approach is ‘Cloud First’ and ‘Public Internet First’, it is considered more secure to route inbound data flows over HSCN into the cloud environment.
The exception is a Secure Public facing sFTP server that will allow secure data transfers from non-NHS organisations.
Identifiable Data
As data arrives and is processed it will pass through a series of steps.
- Separation of the personal and identifiable data.
- The identifiers will be passed to the Master Patient Index (MPI) to retrieve the ICS’s global patient identifier.
- The personal and identifiable data will be stored securely and separately alongside this global patient identifier; the remainder of the information (without the PII) will be stored separately, with the same global patient identifier linking the two.
- In addition, specific use cases will have the data protected by additional processing as agreed in the use case documentation. It is expected that this will include pseudonymisation and anonymisation using agreed processes for individual fields as specified by the approved use case governance documentation. The technical processes employed to achieve this will include symmetric encryption, replacement, redaction, masking, date shifting/date ranges as appropriate for the type of data and size of the dataset etc.
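The separation and linking steps above can be sketched as follows. Everything here is a simplified, illustrative model: the field names, the in-memory dictionaries and the `global_id_for` lookup are assumptions; in production the Master Patient Index is an external ICS service and the two stores are separate, access-controlled resources.

```python
import uuid

PII_FIELDS = {"nhs_number", "name", "dob", "postcode"}  # illustrative field set

mpi = {}             # stand-in for the Master Patient Index service
pii_store = {}       # identifiable data, held separately
clinical_store = {}  # de-identified clinical data

def global_id_for(nhs_number: str) -> str:
    # In production this is a lookup against the ICS MPI, not a local dict.
    return mpi.setdefault(nhs_number, uuid.uuid4().hex)

def ingest(record: dict) -> str:
    """Split a record into PII and clinical parts, linked by a global id."""
    gid = global_id_for(record["nhs_number"])
    pii_store[gid] = {k: v for k, v in record.items() if k in PII_FIELDS}
    clinical_store[gid] = {k: v for k, v in record.items() if k not in PII_FIELDS}
    return gid
```

Because both stores key on the same global identifier, authorised processes can re-link the data, while access to the clinical store alone exposes no direct identifiers.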
Network Security
Inbound Connectivity
The Data Hub environment is connected to both the HSCN network and the public internet. Two HSCN lines connect via an Azure Express Route. This is the primary inbound traffic route for all purposes.
The one exception is the public sFTP server which has an inbound public IP.
Outbound Connectivity
Outbound traffic is routed via the HSCN Network or to the public internet depending on destination. This traffic is routed via a proxy server for both users and servers.
Traffic from both is controlled with a separate whitelist to ensure that only valid destinations can be accessed.
The list of whitelisted sites can only be changed via the Norfolk and Suffolk Datahub CAB.
Firewall
All points of access to the infrastructure (both trusted and untrusted) should be protected by a firewall. All types of access should be denied by default, with only specific agreed access permitted. Traffic into, out of and between the four environments passes through a Palo Alto firewall, which provides advanced modern security functions for all traffic. Alongside traffic control, the firewall provides full logging, including traffic, threats and administrative audit trails.
This environment is managed by a specialist third party cloud services provider.
Segmentation
All Data Hub virtual networks should be segmented into subnets based on workload type, sensitivity and/or function (e.g. storage, application tiers, databases).
This isolation should ensure that only the required traffic can traverse specific subnets.
This approach supports the implementation of zero trust architecture, where communication paths are explicitly authorised and monitored.
Network Security Groups (NSGs)
NSGs act as a second firewall layer, controlling inbound and outbound traffic to all Azure resources at both the subnet and network interface level.
All subnets will be configured with inbound and outbound rules blocking all default traffic.
Rules must be explicitly defined to allow only required traffic, following a deny-by-default model.
Rules should be explicit, endpoint to endpoint; granting access to entire subnet ranges should be avoided.
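The deny-by-default model lends itself to automated checking. The sketch below is a deliberately simplified model of an NSG rule set (not the Azure API shape): it flags rule sets that lack a catch-all Deny or that allow traffic from an entire address space.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int        # lower number = evaluated first
    action: str          # "Allow" or "Deny"
    source: str          # CIDR, single address, or "*"
    destination: str

def rule_problems(rules: list) -> list:
    """Return policy violations for a simplified NSG rule set."""
    problems = []
    ordered = sorted(rules, key=lambda r: r.priority)
    if not ordered or ordered[-1].action != "Deny" or ordered[-1].source != "*":
        problems.append("missing catch-all Deny rule")
    for r in ordered:
        if r.action == "Allow" and "*" in (r.source, r.destination):
            problems.append(f"rule {r.priority} allows an entire address space")
    return problems
```

A check of this kind could run as part of IaC peer review or pipeline validation, so non-compliant rule sets are rejected before deployment rather than discovered in an audit.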
Endpoints
No resources will have public endpoints unless explicitly agreed, and all traffic will be via the private Data Hub network to the Palo Alto firewall.
Auditing of public endpoints will occur regularly.
DMZ
Inbound public traffic is routed directly to an isolated Palo Alto interface and traffic cannot traverse onto the internal network.
DMZ Applications:
Public sFTP – The public sFTP server runs CrushFTP’s DMZ server. This acts as a secure proxy gateway, enhancing perimeter security by isolating external client connections from internal systems.
No direct access to internal networks will be possible. Client connections are made only to the DMZ server, which relays traffic to the internal server without exposing internal IPs or services. This works via an encrypted tunnel established from the internal server out to the DMZ instance, protecting data in transit.
At all times the internal CrushFTP server remains hidden from public access.
File transfers are managed without storing data on the DMZ server, preventing data leakage if the DMZ is compromised.
Management
The Data Hub environment is supported by the internal team and by a support contract with a specialist cloud services provider.
Compliance
The Data Hub utilises Azure’s built-in governance and policy-enforcement tools to support continuous compliance.
Azure Policies
Agreed Azure Policy rules should be enforced to ensure appropriate compliance with NHS security standards. Changes to these rules must be brought to the Norfolk and Suffolk Data Hub CAB for approval.
These rules include:
- Only approved Azure regions (UK South/West) may be used.
- Mandatory use of encryption at rest and in transit.
- Require tagging for resource classification and data ownership.
- Enforce use of managed disks, secure networking, and private endpoints.
- Disallow use of public IPs where not explicitly permitted.
Microsoft Defender for Cloud
Microsoft Defender for Cloud will be configured for all aspects of the Data Hub environment, including all storage and databases, and provides continuous assessment of resources against regulatory benchmarks. These include the UK NHS DSP Toolkit, NIST SP 800-53, and the CIS Microsoft Azure Foundations Benchmark.
Azure Monitor
All resources capable of logging activity will be configured to log to the centralised logging resource group. This provides alerting and monitoring of relevant activities.
Key Storage
All keys and secrets will be held in Azure Key Vaults with appropriate RBAC controls.
Security Reviews and Testing
Vulnerability Scanning
Part of the monitoring will include regular vulnerability scanning, including the use of automated Microsoft tools within the Azure Security Centre.
Specifically, regular automated vulnerability scanning will be carried out against all storage, databases and virtual machines using Defender for Cloud, alongside regular third-party vulnerability assessments.
Penetration Tests
The Data Hub must be subject to regular third-party security reviews and penetration testing, to the same level as the requirements of the DSPT. Testing must be conducted independently of the ICB by a suitably qualified professional, e.g. CREST certified. To demonstrate assurance, the final report should be shared with all Data Hub partners.
The annual IT penetration testing is scoped in negotiation between the ICB SIRO, the business and the testing team.
Virus and Malware
Microsoft Defender for Endpoint protection will be maintained on all Windows and Linux servers. In addition, as part of the use of Defender for Cloud, the Datahub will utilise Microsoft Antimalware for Azure.
Patching and Updating
Patching activity is managed by a specialist third party cloud services provider.
Security patches will be applied in an appropriate time frame as shown below:
- ‘Critical’/’High’ patches should be deployed within 10 days
- All other patches will be deployed as part of the regular automated monthly patching using Azure Update Manager.
Specific details of the Patching at a server level can be found in the Data Hub Infrastructure Devops Wiki.
Backups
See Disaster Recovery Policy.
Software & OS
Support
Only vendor- (or community-) supported versions of operating systems and applications will be used, and these will be maintained as part of patching to ensure they remain appropriately supported.
Unnecessary Functionality
All unnecessary applications will be disabled/removed from the environments and should not be included in any master images. Should software cease to be used, it should be safely decommissioned and removed in a timely manner, and documentation and reference images updated.
Software
Only approved software packages will be installed in the environment, and this will be monitored on a regular basis.
In addition, only approved administrative users will have access to install software on Data Hub assets, subject to CAB approval.
The ICB Datahub team shall monitor the installation of any software utilised in the Data Hub environment, keep a record of all software used and ensure that licences are in place. The ICB may be prosecuted if illegally copied software is found to be resident on any of its information systems, including laptop computers used outside the normal boundaries of the ICB.
Access
Password changes are forced by the system.
All access will be strictly controlled with the use of RBAC groups.
All group memberships will be reviewed as part of the monthly security review.
All access will be logged and audited as part of the monthly security review.
Users
Formal procedures shall be used to control access to systems on the Data Hub infrastructure, ensuring that an individual’s access to systems is limited to that required by their job function. Each application for access must be countersigned by an authorised manager.
Access privileges shall be added/modified/removed – as appropriate – when an individual changes job or leaves (applying the principle of least privilege).
No individual shall be given access to a live system unless properly trained and made aware of their security responsibilities. Users should keep their passwords secure and never disclose them to colleagues. Passwords must not be written down in plain text. Password hints can be used as long as they are personal to the user and not easily guessable by anyone else.
All passwords should be at least 12 characters long and, in accordance with NCSC recommendations (https://www.ncsc.gov.uk/blog-post/the-logic-behind-three-random-words), user-generated passwords should comprise three random words. Passwords cannot be older than 12 months, and users cannot reuse any of their last 8 passwords. After initial allocation, passwords must be changed before any transaction can be performed.
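The length and reuse rules above can be expressed as a simple server-side check. This is a minimal sketch only: real systems compare against salted hashes rather than a plaintext history, and the “three random words” recommendation is guidance to users rather than an enforceable rule.

```python
def password_acceptable(candidate: str, previous: list) -> bool:
    """Apply the policy's length and reuse rules to a candidate password.

    `previous` is the ordered password history; in a real system this would
    hold salted hashes, never plaintext.
    """
    if len(candidate) < 12:
        return False          # must be at least 12 characters
    if candidate in previous[-8:]:
        return False          # may not match any of the last 8 passwords
    return True
```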
All accounts will be configured to use MFA with the Microsoft Authenticator application. Where a user is unable to use the Authenticator app, an appropriate hardware-based authenticator (such as the YubiKey 5 Series) can be purchased by the user’s organisation and will be configured to allow access to the Datahub.
Standard user accounts cannot install any software on devices, and internet access will be filtered to a restricted set of known endpoints.
Requests for additional software or for access to other internet endpoints will be considered weekly as part of the Change Advisory Board meetings.
As an ICS resource, users fall into three groups: Local Users; Guest Users; and ‘Managed’ Guest Users.
Local Users
Local user accounts are minimal and are used for administrative purposes only.
Guest Users
Manually added users. All such accounts will be subject to a weekly automated inactivity check:
- more than 30 days (but fewer than 45) without activity = warning email.
- more than 45 days (but fewer than 60) without activity = account disabled.
- more than 60 days without activity = account deleted.
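The tiered check can be sketched as below. The thresholds match the policy; the exact treatment of boundary days (e.g. exactly 45 or 60 days) is an assumption, as is the account/date handling.

```python
from datetime import date

def inactivity_action(last_active: date, today: date) -> str:
    """Return the action for a guest account given its last activity date."""
    days = (today - last_active).days
    if days > 60:
        return "delete"   # more than 60 days without activity
    if days > 45:
        return "disable"  # more than 45 days without activity
    if days > 30:
        return "warn"     # more than 30 days without activity
    return "none"
```

Run weekly over all guest accounts, this yields at most one action per account per check, escalating from warning through disablement to deletion as inactivity continues.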
‘Managed’ Guest Users
Managed Guest Users are accounts added automatically to the environment based on synchronisation with host organisations. Synchronisation happens weekly and aligns the approved user groups at each organisation with guest access to the Datahub. When a user is removed from the host organisation, the synchronisation will ensure they are removed from the Datahub.
Administration
No administrative accounts will have email enabled.
Administrative accounts will utilise hardware based FIDO authentication tokens.
Any additional software requested can only be installed as part of an approved change request.
Privileged Identity Management (PIM) will be configured for all administrative accounts.
Administrative Accounts will have minimum permissions with additional requirements requested as part of PIM and these must be timebound.
All access to the servers comprising the Datahub environment will be via Azure Bastion and Key Vault, which stores Windows credentials and Linux SSH keys. All Key Vaults will have independent RBAC permissions applied, ensuring granular control.
Data Breaches
Any security incident should be immediately reported to the ICB Information Governance Manager, and any subsequent steps taken should be under their direct guidance/recommendations. Consideration should also be given to reporting it to the ICB’s Cyber Security Manager.
Incident Management
A security incident is an event which may result in:
- degraded system integrity
- loss of system availability
- disclosure of confidential information
- disruption of activity
- financial loss
- legal action
- unauthorised access to applications
The ICB Technical & Solutions Architect shall report incidents through the normal reporting routes as outlined in the ICB’s Incident Management Processes and, for serious incidents, to a director for action. All security incidents that potentially have an impact on the integrity of the HSCN infrastructure shall be reported immediately to information security personnel.
A breach in security shall be properly investigated and resolved where it is system- or process-specific; where misuse or non-compliance is apparent, appropriate disciplinary action shall be taken.
Each user is personally responsible for ensuring that no actual or potential security breaches occur because of their actions. Users shall ensure that they do not disclose their passwords or allow anyone else to use their credentials to work on any systems, local PCs or network applications.
All security incidents shall be formally logged as part of the ICB IG Incident Management process and categorised by severity, with action/resolution by the ICB Technical & Solutions Architect (and relevant experts as necessary), who will investigate the root cause of the incident; management will ensure that lessons are learnt to prevent the incident recurring.
Configuration & Change Control
All changes to the production environment should be submitted via the Norfolk and Suffolk ICB Data Hub Change Advisory Board process. These will be categorised as Standard, Normal or Emergency.
Only changes on the approved list (published to DevOps) can be categorised as Standard Changes.
Any continuous integration processes must be agreed as pre-approved changes prior to implementation in a Live/Production environment.
Change Board meets 09:30 every Tuesday, full details are held in the Data Hub Devops Wiki.