Architecture

High-level network diagram

Written by Emily McMakin
Updated over 5 months ago

The CAVO® system is built on AWS to deliver a secure, HIPAA-compliant environment capable of scaling to the needs of your enterprise.

All data is encrypted both at rest and in transit.

Flow

Data arrives via drag and drop, API, SFTP, or HL7 delivery to the AWS architecture depicted below. SFTP utilizes encryption at rest and in transit, and the web frontend for SFTP uses TLS 1.2.

Data is processed and consumed by the application servers. Temporary files are stored on an encrypted volume during processing, then pushed to an AES 256-bit encrypted S3 bucket specific to the client (clients do not share S3 buckets) and deleted, and a message is sent to SQS for subsequent OCR processing. An OCR Lambda function processes each SQS message by downloading the PDF page to an encrypted volume and performing optimization and OCR routines.

Each client has a distinct database instance for housing their data. Clients do not share database credentials.
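As a rough, hypothetical sketch of the S3 and SQS hand-off described above (the bucket name, queue URL, and function names are placeholders, not actual CAVO® resources), an application server and the OCR Lambda might look something like this in Python with boto3:

import json
import os
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Placeholder names; in practice each client has its own dedicated bucket.
CLIENT_BUCKET = "example-client-documents"
OCR_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-ocr-jobs"

def push_page_for_ocr(temp_path: str, object_key: str) -> None:
    # Upload the temporary page to the client-specific bucket with
    # server-side AES-256 encryption, remove the local copy, and queue
    # the page for OCR.
    s3.upload_file(
        temp_path,
        CLIENT_BUCKET,
        object_key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )
    os.remove(temp_path)
    sqs.send_message(
        QueueUrl=OCR_QUEUE_URL,
        MessageBody=json.dumps({"bucket": CLIENT_BUCKET, "key": object_key}),
    )

def ocr_handler(event, context):
    # Illustrative Lambda handler: download each queued page and run the
    # optimization and OCR routines on it.
    for record in event["Records"]:
        body = json.loads(record["body"])
        local_path = "/tmp/" + os.path.basename(body["key"])
        s3.download_file(body["bucket"], body["key"], local_path)
        # optimize_and_ocr(local_path)  # placeholder for the actual routines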

Application servers expose only ports 80 and 443; port 80 exists solely to redirect traffic to 443.
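That redirect would typically be handled by the load balancer or web server configuration rather than application code; purely as an illustration of the behavior (not the actual implementation), it is equivalent to:

from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    # Answer every plain-HTTP request with a permanent redirect to HTTPS.
    def do_GET(self):
        host = self.headers.get("Host", "example.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

    do_HEAD = do_GET

if __name__ == "__main__":
    # Port 80 serves nothing except this redirect; all real traffic uses 443.
    HTTPServer(("0.0.0.0", 80), RedirectToHTTPS).serve_forever()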

Production servers are not directly accessible by anyone. They contain no credentials and accept no inbound traffic on ports other than 443. Servers reside behind a load balancer on private IP addresses.

AWS Services Utilized

EC2, SQS, S3, ELB, CloudWatch, Lambda, SageMaker, CloudFront, ECS
Access granted to: VP Engineering, Senior Application Developer

Data Center us-east-1
7600 Doane Dr.
Manassas, VA 20109

Data Center us-west-2
91088 Ball Ln,
Grass Valley, OR 97029

Code Repository

Code versioning is maintained on GitHub using public/private key pairs. No credentials are stored in the codebase.

Access granted to: VP Engineering, Senior Application Developer

Software Stack

Backend Databases: Amazon Aurora PostgreSQL and OpenSearch

APIs: Node.js, Python, Go

Frontend: Angular

Session Management

CAVO® utilizes a JSON Web Token (JWT) to prove that a user has been properly authenticated. The token also identifies which resources the user can access. As the user navigates through the application, the token is sent with every request so that access to those resources can be validated. Because requests may be routed to different web servers depending on load, we use a remote store to house the data each user session needs, keeping it consistent across servers.
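A minimal sketch of that token check, assuming the PyJWT library and a placeholder signing key (the claim layout shown here is illustrative, not the exact CAVO® token format):

import jwt  # PyJWT

JWT_SECRET = "replace-with-the-real-signing-key"  # placeholder only

def authorize_request(auth_header: str, required_resource: str) -> dict:
    # Validate the JWT sent with a request and confirm it grants access
    # to the requested resource; raises if either check fails.
    token = auth_header.removeprefix("Bearer ").strip()
    # Signature and expiry are verified here; an invalid or expired token
    # raises jwt.InvalidTokenError.
    claims = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
    # The token also carries the resources this user is allowed to access.
    if required_resource not in claims.get("resources", []):
        raise PermissionError("token does not grant access to " + required_resource)
    return claims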

We chose Redis because it maintains a persistent socket connection and offers very low latency. The data stored in Redis is transient data needed only for each user session and does not contain PHI. Regardless, we encrypt the data before sending it to Redis and decrypt it upon receipt.
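A minimal sketch of that encrypt-before-store pattern, assuming the redis-py client and the cryptography library's Fernet for symmetric encryption (key handling and naming here are illustrative only):

import json
import redis
from cryptography.fernet import Fernet

# Placeholder key; a real deployment would load this from a secrets store.
fernet = Fernet(Fernet.generate_key())
store = redis.Redis(host="localhost", port=6379)

SESSION_TTL_SECONDS = 3600  # transient, per-session data only

def save_session(session_id: str, data: dict) -> None:
    # Encrypt the session payload before it ever reaches Redis.
    ciphertext = fernet.encrypt(json.dumps(data).encode())
    store.setex("session:" + session_id, SESSION_TTL_SECONDS, ciphertext)

def load_session(session_id: str):
    # Fetch and decrypt the session payload on the way back out.
    ciphertext = store.get("session:" + session_id)
    if ciphertext is None:
        return None
    return json.loads(fernet.decrypt(ciphertext).decode())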
