All posts by Ian Roseman

As a space ranger and member of Star Command, I protect the universe from Evil Emperor Zurg. I also like SciFi, Tech, Philosophy, and Sociology.

Best Learning Management Systems for Education | ZDNet

In the context of our new normal, we present our guide of 12 very capable learning management systems, each of which provides a different take on managing learning. We’ll show you LMS solutions tailored to K-12 and higher education, LMS solutions aimed at the enterprise and SMBs, and even one that helps you sell your own courses (AbsorbLMS) and another that optimizes the ability to provide custom certifications (TalentLMS).

Source: The best learning management systems for education, enterprise, and small business | ZDNet

Beyond learning technologies, K-12 education faces social development challenges, especially for younger children transitioning into the education system.

Also, special needs children who require direct attention, and who are pulled out of mainstream class time for one-on-one assistance, are at the highest risk. We need to figure out how best to provide for these children in a “remote-only” configuration, where “close contact” is required to help mitigate issues such as attention deficit.

Data Loss Prevention (DLP) for Structured Data Sources

When people think of Data Loss Prevention (DLP), they usually think of endpoint protection, such as the Symantec Endpoint Security solution, preventing data from being uploaded to web sites or downloaded to a USB device. The data being “illegally” transferred typically conforms to a particular pattern, such as Personally Identifiable Information (PII), e.g. Social Security numbers.

Using a client for local monitoring of the endpoint, the agent detects the transfer of information as a last line of defense against external distribution. Endpoint solutions can monitor suspicious activity and/or proactively cancel a data transfer in progress.
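
As a rough illustration of the pattern-matching half of this, here is a minimal sketch, not any vendor’s implementation, of scanning outbound file content for SSN-shaped strings before a transfer is allowed; the file path and patterns are hypothetical.

```python
import re
import sys

# Hypothetical data masks an endpoint agent might check before allowing a transfer.
# The patterns are deliberately simple; commercial products ship far more robust rules.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "US_PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_file_for_pii(path):
    """Return a count of matches per data mask found in the file."""
    with open(path, "r", errors="ignore") as handle:
        text = handle.read()
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = len(matches)
    return hits

if __name__ == "__main__":
    findings = scan_file_for_pii(sys.argv[1])
    if findings:
        # A real agent would block or quarantine the transfer at this point.
        print(f"Potential PII detected; transfer should be reviewed: {findings}")
    else:
        print("No known data masks matched; transfer allowed.")
```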

Moving closer to the source of the data loss, monitoring databases filled with PII has its advantages and disadvantages. One may argue there is no data loss until an employee attempts to export the data outside the corporate network and the data is in flight. In addition, extracted PII data may be “properly utilized” within the corporate network for analysis.

There is a database solution, Teleran Technologies, that provides similar “endpoint” monitoring and protection, e.g. identifying PII data extraction with real-time query cancellation upon detection, leveraging “out of the box” data patterns. Teleran supports relational databases such as Oracle and Microsoft SQL Server, in both on-prem and cloud deployments.

Updates in Data Management Policies

Identifying the points of origination of data loss provides opportunities to close gaps in data management policy and implement additional controls over data. Data classification is done dynamically based on common data mask structures, and users may build additional rules to cover custom structures. So, for example, if a business analyst executes a query against a database that appears to fit predefined data masks, such as SSNs, the query may be canceled before it is even executed, and/or the “suspicious” activity can be flagged for the Chief Information Officer (CIO) and/or Chief Security Officer (CSO).
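
To make the “cancel before execution” idea concrete, here is a minimal sketch of the kind of rule evaluation such a tool might perform. The column names, rules, and decision messages are hypothetical illustrations, not Teleran’s actual engine.

```python
import re

# Hypothetical classification rules: column names and literals that fit common data masks.
SENSITIVE_COLUMN_RULE = re.compile(r"\b(ssn|social_security|tax_id)\b", re.IGNORECASE)
SSN_LITERAL_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_query(sql):
    """Decide whether a query should be allowed or canceled before execution."""
    if SENSITIVE_COLUMN_RULE.search(sql):
        return False, "References a column classified as PII; cancel and flag for CIO/CSO."
    if SSN_LITERAL_RULE.search(sql):
        return False, "Contains an SSN-shaped literal; cancel and flag for CIO/CSO."
    return True, "No predefined data mask matched; allow."

allowed, reason = screen_query("SELECT name, ssn FROM customers")
print(allowed, reason)  # False ... cancel and flag for CIO/CSO.
```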

Bar none, I’ve seen only one firm that defends a company’s data assets closer to the probable point of leakage, the database itself: Teleran Technologies. See what they have to offer your organization for data protection and compliance.

Prevalent Remote Work Changes Endpoint Strategy

With remote work now prevalent, endpoints in our corporate environments may be too late a point at which to enforce data protection. We may need to bring data loss detection into the inner sanctum of the corporate network, moving prevention closer to the source of the data being extracted. And how are “semi-trusted” third parties, such as offshore staff augmentation, dealt with?

Endpoint DLP – Available Breach Tactics

Endpoint DLP may capture and contain attempts to extract PII data, for example by parsing text files for SSNs or other data masks. However, there are ways around transfer detection that make exfiltration difficult to identify, such as screen captures of data, which convert text into images. Some endpoint providers boast about their Optical Character Recognition (OCR); however, turning on this feature may produce many false positives, too many to sift through in monitoring and unmanageable to control. The best DLP defense is to monitor and control closer to the data source, and perhaps flag data requests from employees, e.g. after a SELECT statement is entered, a UI pops up asking “Reason for Request?” when PII extraction is identified in real time, with auditable events that can flow into Splunk.
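
As a sketch of the auditable-event piece, the snippet below posts a flagged query, along with the employee’s stated reason for the request, to Splunk’s HTTP Event Collector (HEC). The host, token, and event fields are placeholders you would supply.

```python
import json
import urllib.request

# Placeholders: point these at your own Splunk HTTP Event Collector endpoint.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "<your-hec-token>"

def audit_flagged_query(user, sql, reason_for_request):
    """Send an auditable 'PII extraction flagged' event to Splunk via HEC."""
    payload = {
        "sourcetype": "_json",
        "event": {
            "type": "dlp_pii_query_flagged",
            "user": user,
            "query": sql,
            "reason_for_request": reason_for_request,
        },
    }
    request = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        method="POST",
    )
    # urlopen raises an HTTPError for non-2xx responses, surfacing delivery failures.
    with urllib.request.urlopen(request) as response:
        response.read()

# Example: an analyst runs a SELECT that matches a PII mask and supplies a justification.
# audit_flagged_query("jdoe", "SELECT ssn FROM customers", "Quarterly compliance audit")
```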

AR Sudoku Solver Uses Machine Learning To Solve Puzzles Instantly

A very novel concept: applying Augmented Reality and Artificial Intelligence (i.e. Machine Learning) to solving puzzles such as Sudoku. Maybe not so novel considering AR’s uses in manufacturing.

Next, we’ll be using similar technology for human-to-human negotiations: reading body language, understanding logical arguments, reading human emotion, and rebutting remarks in a debate.

Litigators, watch out… Or co-counsel? Maybe a hand of poker?

Source: AR Sudoku Solver Uses Machine Learning To Solve Puzzles Instantly

Cloud-native as the Future of Data Loss Prevention – Nightfall AI

An interesting approach to Data Loss Prevention (DLP):

Data loss prevention (DLP) is one of the most important tools that enterprises have to protect themselves from modern security threats like data exfiltration, data leakage, and other types of sensitive data and secrets exposure. Many organizations seem to understand this, with the DLP market expected to grow worldwide in the coming years. However, not all approaches to DLP are created equal. DLP solutions can vary in the scope of remediation options they provide as well as the security layers that they apply to. Traditionally, data loss prevention has been an on-premise or endpoint solution meant to enforce policies on devices connected over specific networks. As cloud adoption accelerates, though, the utility of these traditional approaches to DLP will substantially decrease.

Established data loss prevention solution providers have attempted to address these gaps with developments like endpoint DLP and cloud access security brokers (CASBs) which provide security teams with visibility of devices and programs running outside of their walls or sanctioned environments. While both solutions minimize security blind spots, at least relative to network layer and on-prem solutions, they can result in inconsistent enforcement. Endpoint DLPs, for example, do not provide visibility at the application layer, meaning that policy enforcement is limited to managing what programs and data are installed on a device. CASBs can be somewhat more sophisticated in determining what cloud applications are permissible on a device or network, but may still face similar shortfalls surrounding behavior and data within cloud applications.

Cloud adoption was expected to grow nearly 17% between 2019 and 2020; however, as more enterprises embrace cloud-first strategies for workforce management and business continuity during the COVID-19 pandemic, we’re likely to see even more aggressive cloud adoption. With more data in the cloud, the need for policy remediation and data visibility at the application layer will only increase and organizations will begin to seek cloud-native approaches to cloud security.

Source: Cloud-native as the Future of Data Loss Prevention – Nightfall AI

Level Up Social Media with Microsoft Power Automate

Create Automated Workflows with Microsoft Power Automate

I’ve been using this powerful workflow automation platform since it was called Microsoft Flow and was free for low-volume usage. Essentially, users can pick any source of data, create triggers, transform data for a multitude of target systems, and notify through a multitude of channels, such as email and push notifications. The platform is boundless through “Connectors” to just about any third-party platform, from Salesforce to an Oracle database. The basic plan after the free trial is 15 USD per month.

Connectors and Templates: Ready, Set, Go

First, define your connectors, such as your Google email account connection details and your Twitter account information. Second, select one of the many “out of the box” predefined templates, such as one from the “Social Media” category.

MSFT Power Automate Templates

Twitter Use Cases – Configure In Minutes

Once you’ve signed up for the Power Automate SaaS platform, you can start creating workflows in minutes. At first I used the “Templates”, but it is much easier to create workflows from scratch. Here are a few opportunities for getting started.

Retweet based on Tweet Search Criteria

  • Define which tweets you would like to retweet using query search criteria: words, combinations of hashtags, and phrases with simple AND and OR logic (see the sketch after this list for the equivalent logic in code).
  • Optionally, add a condition before performing an action within the workflow. In this case, we can allow the retweet only if the retweet count is greater than N retweets.
  • Select the returned Tweet ID to perform the “Retweet” action.
  • Optionally, add notifications, such as emailing yourself each time you retweet, and include elements of the tweet within your email, such as the tweet text, the tweet’s user ID, or a dozen other tweet elements.
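
Outside of Power Automate, the equivalent logic of this workflow looks roughly like the sketch below, using the Twitter API via the tweepy library; the search query, threshold, and credentials are placeholders, and the email notification step is omitted.

```python
import tweepy

# Placeholder credentials; retweeting requires user-context (OAuth 1.0a) keys.
client = tweepy.Client(
    bearer_token="...",
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

QUERY = '(#Agile OR #DevOps) "Azure DevOps" -is:retweet'  # words, hashtags, AND/OR logic
MIN_RETWEETS = 5  # the "retweet count greater than N" condition

# Trigger: search recent tweets matching the query criteria.
response = client.search_recent_tweets(
    query=QUERY, max_results=10, tweet_fields=["public_metrics"]
)
for tweet in response.data or []:
    # Condition: only act when the tweet already has enough traction.
    if tweet.public_metrics["retweet_count"] > MIN_RETWEETS:
        client.retweet(tweet.id)  # Action: retweet using the returned Tweet ID
        # A notification action (e.g., emailing yourself the tweet text) would go here.
```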

Catalog Tweets when they meet your Tweet Criteria

  • Define what tweets you would like to store in your “data” repository.
  • Select from one of a multitude of data targets ranging from Excel spreadsheets, Google Sheets, SQL Server, Oracle Database, and dozens of other repositories.
  • Based on the data target, map the available tweet elements to the target fields, such as a specific database, table, and columns (a sketch of this mapping follows the list).
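
Here is a sketch of the equivalent “catalog to a data target” step, mapping a few tweet elements into a local SQLite table; the table name and columns are arbitrary choices for illustration, whereas Power Automate would use one of its data connectors.

```python
import sqlite3

# Arbitrary local data target for illustration.
conn = sqlite3.connect("tweets.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS cataloged_tweets (
           tweet_id TEXT PRIMARY KEY,
           author_id TEXT,
           tweet_text TEXT,
           retweet_count INTEGER
       )"""
)

def catalog_tweet(tweet_id, author_id, text, retweet_count):
    """Map tweet elements onto the target table's columns."""
    conn.execute(
        "INSERT OR REPLACE INTO cataloged_tweets VALUES (?, ?, ?, ?)",
        (tweet_id, author_id, text, retweet_count),
    )
    conn.commit()

catalog_tweet("1234567890", "42", "Example tweet matching my criteria", 7)
```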

And Beyond

Microsoft Power Automate can run automated workflows well beyond the social media capabilities highlighted here. I have a wish list of “Triggers” and “Actions” not yet supported by the platform: I’d like the same “Trigger” criteria we have with Twitter extended to LinkedIn, triggering on LinkedIn posts based on query criteria, then extracting and loading them into an external data source.

Best Kept Secret of Azure DevOps by Microsoft – Feature and Epic Roadmap

One of the first hurdles to get over when working with a manager who is accustomed to working with Waterfall projects:

Show me our milestones for this project. When are these project artifacts to be delivered? Is there a timeline that articulates our deliverables? I want to know when I should get engaged in the project, such as when milestone delivery dates slip and we need to revisit or rebaseline our projected delivery timetable.

Going through the agile transformation at the team level, invoking the Agile Values empowers the team to “Respond to Change”, which may mean deviating from our initially targeted “milestones”. Not only may the timetable shift, but the milestone itself, and what it represents, may significantly change, and that’s OK with an Agile team. Product stakeholders outside the team may not be as adaptive to changes in deliverables, and these “outside” stakeholders may not be engaged in the cadence of Scrum ceremonies.

Four Agile Values

When working with Agile toolsets like JIRA and Azure DevOps, a Gantt chart does not traditionally come to mind. We think of a product backlog and user story commitments to the current and next sprint(s). Maybe we are targeting transparency across several sprints of work, such as with SAFe and its Innovation and Planning (IP) Iteration. We are still not seeing visuals in the “traditional” style of Waterfall efforts.

Azure DevOps Provides the Necessary Visuals

So, how do we keep our “outside” product stakeholders engaged in the product life cycle without inviting them to all Scrum ceremonies? We don’t have Gantt charts, but we do have “Feature timeline and Epic Roadmap”, a plugin for Azure DevOps available through the Microsoft Marketplace, for FREE, by Microsoft DevLabs. To me, this functionality should be “out of the box”, but apparently that is not the case. I had to have the need/pain in order to research, find this plugin, and install it in our enterprise environment. Why would Microsoft disassociate itself from this plugin to some small degree? I can only hypothesize, like the man on the grassy knoll. Regardless of why, it’s in there, ready for you to install.

Articulate Epics, Features, and User Stories

1. Populate the Product Backlog with Features and Epics

Using Azure DevOps, during the initial phase of the effort (Sprint 0), work with your Product Owner to catalog the Features you are looking to deliver within your product evolution, i.e. the project. Each of these Features should roll up into Epics, also commonly called Themes. Epics are the highest level of articulation of delivery.
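
For teams that prefer to seed the backlog programmatically rather than through the UI, the Azure DevOps REST API can create Epic and Feature work items. A minimal sketch follows; the organization, project, Personal Access Token, and titles are placeholders.

```python
import json
import requests

# Placeholders for your Azure DevOps organization, project, and a Personal Access Token (PAT).
ORG = "your-org"
PROJECT = "your-project"
PAT = "<personal-access-token>"

def create_work_item(work_item_type, title):
    """Create a work item (e.g., 'Epic' or 'Feature') in the product backlog."""
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/"
        f"${work_item_type}?api-version=7.0"
    )
    patch_document = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
    ]
    response = requests.post(
        url,
        data=json.dumps(patch_document),
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # basic auth with an empty username and the PAT as the password
    )
    response.raise_for_status()
    return response.json()

epic = create_work_item("Epic", "Customer Self-Service Portal")            # hypothetical Epic
feature = create_work_item("Feature", "Password reset without help desk")  # hypothetical Feature
```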

2. Define User Stories, and Attribute them to Features

Working with the Product Owner and the implementation team, create User Stories in the Product Backlog that will help the team implement the Feature set. Make sure to correlate each User Story to one of the Features defined in your Product Backlog. User Story effort estimations are also helpful for determining “how big” a Feature is, i.e. how many sprints it will take to implement.
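
As a simple illustration of that sizing step, the sketch below rolls up hypothetical story point estimates for one Feature and divides by a team’s average velocity; the numbers are made up.

```python
import math

# Hypothetical story point estimates for the User Stories under one Feature.
story_points = [5, 3, 8, 2, 5]
average_velocity = 12  # points this team typically completes per sprint

total_points = sum(story_points)                             # 23
sprints_needed = math.ceil(total_points / average_velocity)  # 2 sprints

print(f"Feature size: {total_points} points, roughly {sprints_needed} sprint(s)")
```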

3. Plan Feature Delivery Within / Across Sprints

Within Azure DevOps, go to Boards –> Backlogs, open the Team Backlog, and select “Feature Timeline”. From there, you can drag, drop, and define the periods of Feature delivery.

  • All Sprints are displayed as columns horizontally across the top of the chart, with an indicator of the current sprint.
  • On the left side are Epics, and the rows represent Features within the Epics.
  • Select the box, “Plan Features”, and a column of unplanned Features will appear to the right of the screen.
Feature Timeline – Plan Features Step 0
  • Drag and drop a Feature from the list of unplanned Features into one of the defined Sprints. Deselect “Plan Features”, and then select the “Info” icon on the planned Feature. A Feature dialog box will appear with all of the User Stories associated with the Feature.
  • The user can drag and drop User Stories from the “Backlog” column to any of the Sprint buckets.
  • Finally, the user should define the Start Iteration and End Iteration for each Feature, showing how a Feature spans multiple sprints and providing an estimate of when the Feature work will conclude.
Feature Planning – Feature, User Story, Sprint Planning
  • Note: although Features may span multiple sprints, User Stories cannot within this Feature planning view of Azure DevOps. A single User Story fitting into a single sprint makes sense and is in keeping with the “Agile Mindset”.

The Final Product – Epic and Feature Roadmap

Epic and Feature Roadmap

Drawback

Although this view is immensely valuable for articulating delivery to ALL stakeholders at both a high and a low level, from Epic and Feature down to the User Story, there is no print capability, which is just as annoying as trying to print out Gantt charts.

Alternatives

Microsoft 365 Project offers the capability of building Roadmap and Timeline (Gantt) views. From Microsoft 365 Project, the user connects to the Azure DevOps server in order to import the User Stories they want to track. At first glance, the user would be tracking Azure DevOps User Stories, which, in my opinion, should instead be done at the Feature level, one layer of abstraction up for business communication.

MS Project Roadmap

The other aspect of MS 365 Project is the cost: there are three tiers, and if you want to use the Roadmap capability, it’s $30 per user/month. Here’s a 4-minute video blog that shows how to get started.

Agile Adoption Challenges: Outside the Circle of Trust

  • Outside the Product Owner and the implementation team, senior stakeholders may require milestones articulating deliverables.
    • Epics or Themes, the high-level declaration of a “Release” essence, roll up from Features and Product Backlog Items (PBIs). Relative effort estimations may be applied at the PBI level and then rolled up to calculate/guesstimate the duration of Epics.
    • Look toward SAFe (Scaled Agile Framework) to change the culture by providing an opportunity for the entire organization to participate in the Agile process. “Program Increments” present windows of opportunity every 8 to 10 weeks.
    • Program Increments may involve multiple Scrum teams, their scope, and how those teams intersect. In order to synchronize these Scrum teams, SAFe introduces Agile Release Trains (ARTs) and Release Train Engineers (RTEs) to coordinate the cadence of the Scrum teams in alignment with Epic and Feature deliverables.
  • Stakeholders may require a “waterfall” plan to understand delivery timeframes for milestone artifact deliverables. For example, “When do we deliver in the plan? We have dependencies on XYZ to build upon and integrate.”
    • External teams may have dependencies on artifacts delivered in the plan, so cross-Scrum-team interaction is critical, sometimes through a recurring ceremony, the “Scrum of Scrums”.
  • Additional transparency into the Scrum team, or the “Circle of Trust”, can be provided through the use of Dashboards. Dashboards may contain widgets that produce real-time views into the current initiative, with Key Performance Indicators (KPIs), the metrics being monitored to determine the success of, for example, a Product ABC Epic phase completion.
    • Dashboards may include: Average Team Velocity, Burn Down, Burn Up, Bug Status by Severity, and metrics that are initiative-focused, e.g. N out of Y BI reports have been completed.

Sprint Planning Session: Star-Lord Debuts as PO

I can’t help but chuckle at this scene with Peter Quill and the rest of “the scrum team” as they “deep dive” on the plan. It sounds more like the waterfall approach: the stakeholder and a Project Charter on a napkin.

Highlights:

  • The Product Owner knows a relatively small portion of “the plan” before executing it. Fail fast, and fail often.