An interesting approach to Data Loss Prevention (DLP)
Data loss prevention (DLP) is one of the most important tools enterprises have to protect themselves from modern security threats like data exfiltration, data leakage, and other types of sensitive data and secrets exposure. Many organizations seem to understand this, with the DLP market expected to grow worldwide in the coming years. However, not all approaches to DLP are created equal. DLP solutions can vary in the scope of remediation options they provide as well as in the security layers they apply to. Traditionally, data loss prevention has been an on-premises or endpoint solution meant to enforce policies on devices connected over specific networks. As cloud adoption accelerates, though, the utility of these traditional approaches to DLP will substantially decrease.
Established data loss prevention solution providers have attempted to address these gaps with developments like endpoint DLP and cloud access security brokers (CASBs) which provide security teams with visibility of devices and programs running outside of their walls or sanctioned environments. While both solutions minimize security blind spots, at least relative to network layer and on-prem solutions, they can result in inconsistent enforcement. Endpoint DLPs, for example, do not provide visibility at the application layer, meaning that policy enforcement is limited to managing what programs and data are installed on a device. CASBs can be somewhat more sophisticated in determining what cloud applications are permissible on a device or network, but may still face similar shortfalls surrounding behavior and data within cloud applications.
Cloud adoption was expected to grow nearly 17% between 2019 and 2020; however, as more enterprises embrace cloud-first strategies for workforce management and business continuity during the COVID-19 pandemic, we’re likely to see even more aggressive cloud adoption. With more data in the cloud, the need for policy remediation and data visibility at the application layer will only increase and organizations will begin to seek cloud-native approaches to cloud security.
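At the application layer, cloud-native DLP ultimately comes down to inspecting content against policy. Here is a minimal sketch of that idea in Python, with hypothetical detection rules; real products use far richer detectors (checksums, exact-data matching, ML classifiers) and tie matches to remediation actions.

```python
import re

# Hypothetical DLP content-inspection rules -- illustrative only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(text):
    """Return the names of the rules whose patterns match the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(scan("my key is AKIA1234567890ABCDEF"))  # ['aws_access_key']
```

Because this inspection runs on the content itself rather than on the device or network, it works the same whether the data lives in a sanctioned SaaS app or an unmanaged one, which is the gap endpoint DLP and CASBs struggle with.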
Several relational database software vendors, such as IBM, Oracle, and Teradata, have developed proprietary data warehouse software tightly coupled with server hardware to maximize performance. These solutions have been developed and refined as “on-prem” solutions for many years.
We’ve seen the rise of “Data Warehouse (DW) as a Service” from companies like Amazon, which sells Redshift services.
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds.
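Columnar storage is a big part of why those analytic queries come back fast: an aggregate over one column only has to read that column’s data, not every field of every row. A toy Python sketch of the idea (an illustration of the storage layout trade-off, not Redshift’s actual implementation):

```python
# Toy illustration of why columnar storage helps analytics: an
# aggregate over one column touches far less data when values are
# stored per column rather than per row.

rows = [
    {"order_id": 1, "region": "east", "amount": 120.0},
    {"order_id": 2, "region": "west", "amount": 75.5},
    {"order_id": 3, "region": "east", "amount": 42.0},
]

# Row layout: every field of every row is visited to sum one column.
total_row = sum(r["amount"] for r in rows)

# Column layout: only the 'amount' column needs to be scanned.
columns = {key: [r[key] for r in rows] for key in rows[0]}
total_col = sum(columns["amount"])

assert total_row == total_col == 237.5
```

At petabyte scale, skipping the unneeded columns (plus per-column compression) is what turns these scans from hours into seconds.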
RDB Complex Software/Hardware Maintenance
In recent times, the traditional relational database software vendors have shifted gears to become service providers, offering maximum performance from a solution they themselves host in the cloud. On the positive side, the added complexity of configuring and tuning a blended software/hardware data warehouse shifts from the client’s own team resources, such as Database Administrators (DBAs), Network Administrators, and Unix/Windows Server Admins, to the database software service provider. The complexity of tuning for scalability, along with other maintenance challenges, moves to the software vendor’s expertise, if that’s the abstraction you select. That said, there is some ambiguity in the delineation of responsibilities with the RDBMS vendors’ cloud offerings.
Total Cost of Ownership
Quantifying the total cost of ownership of a solution may be a bit tricky, especially if you’re trying to compare the RDBMS hybrid software/hardware “on-prem” solution against the same or similar capabilities brought to the client via “Data Warehouse (DW) as a Service”.
“On-Prem”, RDB Client Hosted Solution
Several factors need to be considered when selecting any software and/or hardware to be hosted at the client site.
Infrastructure “when in Rome”
Organizations have a quantifiable cost related to hosting physical or virtual servers in the client’s data center, which may be boiled down to a number that includes items like HVAC or new rack space.
For the resources used to maintain and monitor data center usage, there may be an abstracted/blended figure.
Database Administrators maintain and monitor RDB solutions.
Activities may range from RDB patches/upgrades to resizing/scaling the DB storage “containers”.
Application Database Admins/Developers may be required to maintain the data warehouse architecture as new requirements arrive, e.g. creating aggregate tables for BI analysis.
Windows/Unix Server Administrators maintain and patch the server operating systems underlying the database.
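As an illustration of the aggregate-table work mentioned above, here is a minimal Python sketch that pre-aggregates hypothetical order-line facts to a (month, product) grain, the same idea a DBA might express in SQL as a CREATE TABLE ... AS SELECT with a SUM, so BI dashboards read the small summary instead of the detail table:

```python
from collections import defaultdict

# Hypothetical detail table: sales facts at order-line grain.
sales = [
    ("2020-01", "widgets", 100.0),
    ("2020-01", "gadgets", 250.0),
    ("2020-02", "widgets", 175.0),
]

# Build the aggregate "table" keyed by (month, product),
# summing amounts across the detail rows.
aggregate = defaultdict(float)
for month, product, amount in sales:
    aggregate[(month, product)] += amount

print(aggregate[("2020-01", "gadgets")])  # 250.0
```

The maintenance burden the text alludes to is keeping such aggregates refreshed and consistent as the detail data and the BI requirements change.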
Trying to correlate these costs in some type of “apples to apples” comparison with “Data Warehouse as a Service” may require accountants and technical folks to do extensive financial modeling. Vendors such as Oracle offer everything from fully managed services to the opposite end of the spectrum, “bare metal”, essentially “Infrastructure as a Service.” The Oracle Exadata solution can be a significant investment, depending on the investment in redundancy and scalability leveraging Oracle Real Application Clusters (RAC).
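That financial modeling can start as a simple back-of-the-envelope script before the accountants refine it. A hedged sketch with entirely hypothetical figures (placeholders, not vendor pricing):

```python
# Back-of-the-envelope TCO comparison, on-prem vs DW-as-a-service.
# All dollar figures below are hypothetical placeholders.

def on_prem_tco(years, hardware, dc_annual, staff_annual):
    """Up-front hardware plus recurring data-center and staffing costs."""
    return hardware + years * (dc_annual + staff_annual)

def service_tco(years, subscription_annual, reduced_staff_annual):
    """Subscription plus the (smaller) staff still needed in-house."""
    return years * (subscription_annual + reduced_staff_annual)

five_yr_on_prem = on_prem_tco(5, hardware=500_000, dc_annual=60_000,
                              staff_annual=300_000)
five_yr_service = service_tco(5, subscription_annual=250_000,
                              reduced_staff_annual=120_000)
print(five_yr_on_prem, five_yr_service)  # 2300000 1850000
```

A real model would add discount rates, migration costs, and growth in storage/compute, which is exactly why the comparison ends up needing both finance and technical input.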
Support and Staffing Models for DW Cloud Vendors
In order for the traditional RDB software vendors to accommodate a “Data Warehouse as a Service” model, they may need to significantly increase staff across a variety of technical disciplines, as outlined above for the client “on-prem” model. Given the significant ramp-up of staff and the organizational challenges of developing and implementing a support model, relational database vendors may ask: should they leverage a top-tier consulting agency such as Accenture or Deloitte to define, implement, and refine a managed service? It’s certainly a tall order to go from being a software vendor to offering large-scale services. With global corporate footprints and positive track records implementing managed services of all types, it’s an attractive proposition both for the RDB vendor and for the consulting agency that wins the bid. On some level, though, the DW service billing models don’t quite add up: any consulting agency that implements a DW managed service is responsible for ensuring ROI both for the RDB vendor and for its clients. This arrangement may be opaque to the end client leveraging the Data Warehouse as a Service, but certainly the quality of service provided should be nothing less than if the RDB vendor had implemented it itself. If the end game for the RDB vendor is to have the consulting agency implement and mature the service, then at some point bring it in-house, that could help keep costs down while the managed service matures.
Oracle’s managed services documentation is a useful reference for understanding the capabilities realized through these offerings.
Based on market conditions, i.e. the dissolution of Net Neutrality, companies like Akamai are primed to present attractive solutions to a bandwidth-constrained market. Akamai has historically been a market leader in this space, along with Amazon’s CloudFront solution. So I take pause at these developments: although on the surface Akamai has market dominance in this growth area, are there other potential impeding factors?
Akamai’s business operating plan needs to be retooled to compete with ever-increasing competition in a space it once dominated.
Projections (i.e. inside information) regarding FCC regulations that would put Akamai at a market disadvantage. Lobbyists!