The holiday season brings lots of people to your front door. If you have a front door camera, you’re probably getting a flood of alerts letting you know there is motion at the door. It would be great if doorbell cameras could take the next step and incorporate #AI facial/image recognition, notifying you through #iOS notifications WHO is at the front door and, in some cases, which “uniformed” person is at the door, e.g. a FedEx/UPS delivery person.
This facial recognition technology is already baked into Microsoft #OneDrive Photos and Apple #iCloud Photos. It wouldn’t be a huge leap to apply facial and object recognition to catalog the people who come to your front door as well as image recognition for uniforms that they are wearing, e.g., UPS delivery person.
iCloud/OneDrive Photos identify faces in your images and group them by likeness, so the owner of the photo gallery can label a group of faces as Grandma, for example. It may take one extra step for the camera owner to log in to the image/video storage service and classify a group of stills, converted from video, containing Grandma’s face. Meta’s Facebook can also tag the faces within pictures you upload and share, and the Facebook app can “guess” faces based on previously uploaded images.
No need to launch the Ring app to see who’s at the front door. Facial recognition can remove the step of checking what caused the motion at the front door and simply post the iOS notification with the “who’s there”. One less step than launching the Ring app and seeing for yourself.
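The grouping-by-likeness step these services perform could be sketched, in toy form, as clustering face embeddings by similarity. The embedding vectors below are made-up placeholders; a real system would produce them with a face recognition model:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_likeness(embeddings, threshold=0.9):
    """Greedy grouping: each new face joins the first group whose
    representative it resembles, else starts a new group."""
    groups = []  # list of (representative_vector, [face_ids])
    for face_id, vec in embeddings.items():
        for rep, members in groups:
            if cosine_sim(vec, rep) >= threshold:
                members.append(face_id)
                break
        else:
            groups.append((vec, [face_id]))
    return [members for _, members in groups]

# Toy embeddings: two near-identical "Grandma" captures and one courier.
faces = {
    "clip1_frame3": [0.90, 0.10, 0.20],
    "clip2_frame7": [0.88, 0.12, 0.21],
    "clip3_frame1": [0.10, 0.95, 0.05],
}
print(group_by_likeness(faces))
# [['clip1_frame3', 'clip2_frame7'], ['clip3_frame1']]
```

Once grouped, one manual labeling pass ("this cluster is Grandma") is enough for every future notification to carry a name instead of "motion detected".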
Information architecture (IA) focuses on organizing, structuring, and labeling content in an effective and sustainable way. The goal is to help users find information and complete tasks.
There must be consensus: an understanding of each data point collected, and the appropriate labeling and cataloging of the Information Asset. An information asset may have a score attributed to it and leveraged in a multitude of ways, such as guiding the purging of archives, the sensitivity of the information, and the levels of trust.
For each data point collected, correlations/relationships can be added either manually or through an Induction Engine (AI) leveraging a history of relationships. Defining hierarchical relationships between data points, and link types (e.g. predecessor, successor, child, or generally related), further bolsters a larger lexicon.
What are Information Assets?
For example, your phone number is an information asset. Your phone number is provided to everyone you know and is a primary point of reference to contact you. Traditionally, the “phone companies” managed that resource for you. However, in this “new” day and age, we see companies like Google providing a phone number and, as a result, features not generally available, such as Google Voice with call forwarding and number obfuscation.
Common, Consumer, Information Assets Include:
Documents of ALL Types, e.g. text, spreadsheets, presentations, etc.
Domain Names and Email Addresses are Information Assets.
Twitter, Facebook, Instagram, and Other Social Media Platforms Assets, such as User Names, Post Text, Images, Video, and Profile details.
Skype, WhatsApp, and other VoIP Information Assets, such as Phone Number and User Profile information.
Microsoft Teams, Slack, and other Team Collaboration Information Assets, such as the historical, ongoing posted information in the Team Chat, including the integration of 3rd party apps, such as Whiteboard collaborative drawings.
Passwords, Passwords, Passwords
Common, Corporate, Information Assets Include:
All of the Consumer, Information Assets PLUS
Documents of ALL Types, e.g. Solution Architecture docs, Database Models, HR Policies, Org Charts, Corp. Network Topology, etc.
Disaster Recovery for Information Assets
What happens when the technology managing information assets becomes “unavailable”? What is your impact assessment? Is there a centralized data/information catalog or repository that contains a partial or complete set of Information Assets?
Information Assets are also passwords, and we have a plethora of “secure” password managers; Norton, for example, provides a mechanism to hold passwords in a virtual “safe”.
Insurance Policies for [digital] Information Assets
What is the cost of securing these Information Assets, versus the cost of recuperating the information assets, if that is even possible?
What about hackers that “hold your data/information” hostage, i.e. ransomware?
How do we price out “insurance” for your information, just as we safeguard other valuables with personal articles insurance policies today? Are there personal articles insurance policies that can currently add a rider to your existing coverage? We would need to price out “Information Assets” and their recuperation values.
Norton Life Lock [Personal / Business]
Norton LifeLock reimburses funds stolen due to identity theft up to the limit of the plan total not exceeding $1 Million USD.
Notepads like Notepad++, Microsoft OneNote, and Google Keep are tools that allow their authors to quickly take notes and organize them. A wide array of Information Assets is contained within these applications, such as text and photos, with some data describing the information captured (i.e. metadata). Gathering and exporting this information to reference Information Assets could be a lengthy and laborious process without automation, rules for sorting, and tagging of information.
AI Induction and Rules Engines
Dynamically labeling Information Assets as they are “discovered”: an auto-curation process. For example, the Microsoft Outlook rules engine has a robust library of canned rules for sorting, forwarding, and formatting as emails arrive in your inbox, as well as a host of other rule “triggers”. An Induction engine is a predictive instrument that “observes” behavior over time and then creates/suggests new rules on the basis of the history of user behavior. For example, if MS Outlook had an AI Induction engine and observed a user ‘almost’ always moving an email with the same subject to folder N, the AI Induction engine could create the rule to anticipate the user’s behavior.
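A minimal sketch of such an induction engine, assuming a hypothetical `MoveInductionEngine` that watches (subject, folder) moves and proposes a rule once a subject is ‘almost always’ filed in the same folder:

```python
from collections import Counter, defaultdict

class MoveInductionEngine:
    """Observes (subject, folder) moves and suggests a rule when the
    user 'almost always' files a subject in the same folder."""
    def __init__(self, min_observations=5, min_ratio=0.8):
        self.moves = defaultdict(Counter)  # subject -> folder counts
        self.min_observations = min_observations
        self.min_ratio = min_ratio

    def observe(self, subject, folder):
        self.moves[subject][folder] += 1

    def suggest_rules(self):
        rules = []
        for subject, counts in self.moves.items():
            total = sum(counts.values())
            folder, top = counts.most_common(1)[0]
            # Enough history, and dominant enough, to anticipate the user
            if total >= self.min_observations and top / total >= self.min_ratio:
                rules.append((subject, folder))
        return rules

engine = MoveInductionEngine()
for _ in range(9):
    engine.observe("Weekly Status", "Reports")
engine.observe("Weekly Status", "Inbox")   # the one exception
print(engine.suggest_rules())  # [('Weekly Status', 'Reports')]
```

The thresholds (`min_observations`, `min_ratio`) are the knobs that separate a genuine habit from noise; a production system would also let the user confirm or reject the suggested rule.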
Data Lakes or Sea of Information Assets
Structured, Semi-Structured, and Unstructured data.
Labeling/tagging Information Assets in a consistent fashion.
Retrieval of data, and cross-referenced data types
Description: Alation is a complete repository for enterprise data, providing a single point of reference for business glossaries, data dictionaries, and Wiki articles. The product profiles data and monitors usage to ensure that users have accurate insight into data accuracy. Alation also provides insight into how users are creating and sharing information from raw data. Customers tout the product for its expansive partner ecosystem, and Alation has focused on increasing data literacy when metadata is distributed across business and IT.
My son and I, OneWildRide, are hooked on the Roblox game Theme Park Tycoon 2, and I’m fixated on building out my park. For beginners, there are the “out of the box” rides you can buy, and the number of items you can use to accessorize your park is staggering. Not only can you add “canned” rides, such as the Gravatron, but the theme park builder can also add all different types of roller coasters, water rides, park transportation, etc.
Users of the Theme Park Tycoon 2 are Graded by:
number of active users in your park
the amount of money you make based on park admission, pay per ride, and concession stands
People can “like” your park, and provide feedback at the entrance
Commoditizing Roblox Games
I will shamefully admit that I purchased Roblox Bucks, with real dollars, that can be spent on a plethora of items to build my Theme Park. For example, the Theme Park has a height limit for how high you can build your roller coasters, so naturally, the builder/user has the ability to purchase a lift on that height limit. You can also purchase additional “packs” that give the builder enhancements to their rides, such as running the ride in reverse or looping the ride three times instead of the default single loop. There’s also the conversion of USD to Roblox $$ because builders need to buy the components to build water rides or roller coasters. You can even purchase concession stands (e.g. Popcorn Vendors). The builder of the amusement park must also buy/build restrooms and spread trash cans throughout the park. There is also the concept of day and night, so make sure to buy/place lamps across the park.
Pay to Play – AI Bots = Theme Park $$
These “auto” bots/characters paying to play in your park may leave if they are dissatisfied, e.g. if there are no bathrooms. Also, without trash cans, there will be visible trash on the ground that must be painfully cleaned up, pile by pile, or left there to accumulate. On the flip side, these AI amusement-goers will pay:
Park Entrance Fees
Pay Per Ride
Pay to use the loo
Pay for Concession Stands, such as Soft Drinks, Popcorn, and Pizza
Pay for Theme Park Memorabilia, such as Santa Hats, Tis the Season!
The Theme Park Builder sets the prices for EVERYTHING. The AI Bots have “thoughts”, such as “This ride is really cheap.” to help you gauge your ride pricing, or “I’m Hungry” to suggest you should buy/place concession stands throughout your park.
I should say someone should have seen this coming, several someones. You build this Theme Park at the “block” level, very similar to Minecraft; as far as I can tell, the graphics of Roblox are somewhat superior to Minecraft, although this is a very debatable topic. Minecraft has lots of 3rd party “mods”, or customizations/modifications to the game, and has had a lot of time to cultivate its user base as well as a marketplace for users to buy these modifications. Roblox as an application/gaming platform seems intriguing in light of the IPO. I wonder what the highest-grossing games are on the Roblox platform.
Roblox Theme Park Tycoon 2 is available on Xbox, iPad / iPhone, and Windows to name the environments we use, jumping from device to device wherever is convenient.
My son constantly wants me to come over to his Theme Park and go on rides he has just built. It’s really a lot of fun to go to other builders’ parks, and there is a basic transit system to move between amusement parks. You can get LOTS of ideas by looking at other builders’ parks; some of these parks put the “real world” amusement parks to shame. So far, I’ve seen six (6) people playing concurrently, where you can see who has the most Roblox Bucks, and whose park currently has the most visitors. Naturally, if you’re not the big kahuna, you’ll want to stroll by the other builders’ parks. If you are in close proximity and time it right, you can log in to the same server and play with friends. It doesn’t always seem to work quite right when people jump on and off the game. There is probably a feature I’m not using to guarantee the same server with friends, maybe the “Premium” version of Roblox?
Build Your Own Roblox Games? Monetary Incentives?
Wow, I really didn’t contemplate it that much. I didn’t even think about the possible monetary returns from building one’s own Roblox game. I’m not sure what the requirements would be to become a developer, or how easy or hard it would be to build Roblox games, i.e. is there a coding language to use, a proprietary language, or just a simple graphical tool to build games? No clue if there is a “developer/partner” annual cost, which is what I paid when developing applications for the iPhone / iPad. Also, when playing on the iPad / iPhone Roblox platform hosting the Theme Park game, would Apple get a percentage of “In-App” purchases for Roblox dollars? We purchased Roblox Bucks from the PC and Xbox, so it didn’t occur to me there would be margin paid to the platform on which it runs.
Disclosure – I am not a “Premium” Roblox member or a “game” builder.
When people think of Data Loss Prevention (DLP), we usually think of endpoint protection, such as the Symantec Endpoint Security solution, preventing the upload of data to websites or its download to a USB device. The data being “illegally” transferred typically conforms to a particular pattern of Personally Identifiable Information (PII), e.g. Social Security numbers.
Using a client for local monitoring of the endpoint, the agent detects the transfer of information as a last line of defense against external distribution. Endpoint solutions can monitor suspicious activity and/or proactively cancel the data transfer in progress.
Moving closer to the source of the data loss, monitoring databases filled with PII has its advantages and disadvantages. One may argue there is no data loss until the employee attempts to export the data outside the corporate network and the data is in flight. In addition, extracted PII data may be “properly utilized” within the corporate network for analysis.
There is a database solution from Teleran Technologies that provides similar “endpoint” monitoring and protection, e.g. identifying PII data extraction, with real-time query cancellation upon detection, leveraging “out of the box” data patterns. Teleran supports relational databases such as Oracle and Microsoft SQL Server, both on-prem and in the cloud.
Updates in Data Management Policies
Identifying the points of origination for data loss provides opportunities to close the gaps in data management policy and implement additional controls over data. Data classification is done dynamically based on common data mask structures, and users may build additional rules to cover custom structures. For example, if a business analyst executes a query against a database that appears to fit predefined data masks, such as SSN, the query may be canceled before it is even executed, and/or this “suspicious” activity can be flagged for the Chief Information Officer and/or Chief Security Officer (CSO).
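The data-mask classification described above might look something like this toy sketch; the mask patterns and function names are illustrative, not any vendor’s actual implementation:

```python
import re

# Hypothetical "data mask" library: pattern name -> compiled regex.
DATA_MASKS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def classify_text(text):
    """Return the set of mask names matched, so the caller can
    cancel a query or flag the activity for the CIO/CSO."""
    return {name for name, rx in DATA_MASKS.items() if rx.search(text)}

def should_cancel(query_result_sample):
    """Policy decision: cancel if any predefined mask matches."""
    return bool(classify_text(query_result_sample))

sample = "name=J. Doe ssn=123-45-6789"
print(classify_text(sample))              # {'ssn'}
print(should_cancel("city=Springfield"))  # False
```

Custom structures (employee IDs, internal account numbers) would be covered by letting users register additional patterns in the mask library.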
Corporate environments with prevalent remote working highlight that relying on endpoints may be too late to enforce data protection. We may need to bring data loss detection into the inner sanctum of the corporate network, with prevention closer to the source of the data being extracted. And how are “semi-trusted” third parties, such as offshore staff augmentation, dealt with?
Endpoint DLP – Available Breach Tactics
Endpoint DLP may capture and contain attempts to extract PII data, for example by parsing text files for SSNs or other data masks. However, there are ways around transfer detection that make it difficult to identify, such as screen captures of data, converting text into images. Some endpoint providers boast about their Optical Character Recognition (OCR); however, turning on this feature may produce many false positives, too many to sift through in monitoring and unmanageable to control. The best DLP defense is to monitor and control closer to the data source, and perhaps flag data requests from employees, e.g. after a SELECT statement is entered, the UI pops up a “Reason for Request?” prompt if PII extraction is identified in real time, with auditable events that can flow into Splunk.
Going the consulting path, on your own, is no small feat. Do you have what it takes to persist, survive, and thrive?
Army of One – Not only do you need to perform your CONSULTANCY role, but you also have to be the bookkeeper and the sales and marketing team, always looking for new opportunities.
The Gap Between Gigs – To all recruiters and hiring managers: it’s not a bad thing to have gaps in a candidate’s resume. It’s the way of life in our gig economy. We are constantly hunting for just the right opportunity in a sea of hundreds or thousands of candidates per role.
Keeping Up With Market Trends – Online learning platforms such as Pluralsight keep their content fresh, relevant, and in line with your career path.
Networking, Networking, Networking – at every opportunity, build your network of contacts and keep them in the know
Over the last two decades, I’ve been involved in several solutions that incorporated artificial intelligence and, in some cases, machine learning. I’ve understood them at the architectural level and, in some cases, taken a deeper dive.
I’ve had the urge to perform a data trending exercise where not only do we identify existing trends, similar to “out of the box” Twitter capabilities, but we can also augment “the message” as trends unfold. This is probably AI 101, but I wanted to submerge myself in understanding this Data Science project. My Solution Statement: given a list of my interests, we can derive sentence fragments from Twitter, traversing each tweet and parsing off each word as a possible “breadcrumb”. Then remove the Stop Words, and voila: words that can identify trends, and can be used to create/modify trends.
Finally, to give the breadcrumbs and those “words of interest” greater depth, we can use the Oxford Dictionaries API to enrich the data with thesaurus entries and synonyms.
Gotta Have a Hobby
It’s been a while now that I’ve been hooked on Microsoft Power Automate, formerly known as Microsoft Flow. It’s relatively inexpensive and has the capability to be a tremendous resource for almost ANY project. There is a FREE version, and the paid version is $15 per month. It’s a no-brainer to pick the $15 tier with the bonus data connectors.
I’ve had the opportunity to explore the platform and create workflows. Some fun examples: initially, using MS Flow, I parsed RSS feeds, and if a criterion was met, I’d get an email. I did the same with a Twitter feed. I then kicked it up a notch and inserted these records of interest into a database. The library of Templates and Connectors is staggering, and I suggest you take a look if you’re in a position where you need to collect and transform data, followed by a load and a notification process.
What Problem are we Trying to Solve?
How are trends formed, and what factors influence them? Who are the most influential people providing input to a trend? Is influence based on location? Does language play a factor in how trends are developed? End Goal: driving trends, not just observing them.
The data set is arguably the most important aspect of Machine Learning. A set of data that does not conform to the bell curve and consists of all outliers will produce an inaccurate reflection of the present, and a poor prediction of the future.
First, I created a table of search criteria based on topics that interest me.
Then I created a Microsoft Flow for each of the search criteria to capture tweets with the search text, and insert the results into a database table.
Out of the total 7450 tweets collected from all the search criteria, 548 tweets were from the Search Criteria “Learning” (22).
After you’ve obtained the data, you will need to parse the Tweet text into “breadcrumbs”, which “lead a path” to the Search Criteria.
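A minimal sketch of that breadcrumb step, with a deliberately tiny stop-word list (real pipelines would use a fuller one, e.g. from NLTK):

```python
import re

# Minimal stop-word list for illustration; real pipelines use a fuller set.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "for", "on"}

def breadcrumbs(tweet):
    """Lowercase, strip URLs, tokenize, drop stop words: the remaining
    words are 'breadcrumbs' that can lead back to a search criterion."""
    tweet = re.sub(r"https?://\S+", "", tweet)          # drop links
    words = re.findall(r"[a-z0-9#@']+", tweet.lower())  # keep hashtags/mentions
    return [w for w in words if w not in STOP_WORDS]

print(breadcrumbs("Machine Learning is the key to the future of #AI https://t.co/xyz"))
# ['machine', 'learning', 'key', 'future', '#ai']
```

Counting how often each breadcrumb co-occurs with a search criterion is then a simple aggregation over the collected tweets.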
Machine Learning and Structured Query Language (SQL)
This entire predictive trend analysis could be much easier with a language of more restrictive syntax, like SQL, instead of English Tweets. Parsing SQL statements to make correlations would be easier because the structure is predictable, e.g.: SELECT Col1, Col2 FROM TableA WHERE Col2 = 'ABC'. Based on the data set size, we may be able to extrapolate and correlate rows returned to provide valuable insights, e.g. the projected performance impact of the query on the data warehouse.
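To illustrate how much easier a restrictive grammar is to parse than free-form English, here is a toy regex-based parser for simple single-table SELECTs (nowhere near a full SQL grammar):

```python
import re

SELECT_RX = re.compile(
    r"SELECT\s+(?P<cols>.+?)\s+FROM\s+(?P<table>\w+)(?:\s+WHERE\s+(?P<where>.+))?",
    re.IGNORECASE,
)

def parse_select(sql):
    """Toy parser for simple single-table SELECTs; a real system
    would use a full SQL grammar."""
    m = SELECT_RX.match(sql.strip())
    if not m:
        return None
    return {
        "columns": [c.strip() for c in m.group("cols").split(",")],
        "table": m.group("table"),
        "where": m.group("where"),
    }

print(parse_select("SELECT Col1, Col2 FROM TableA WHERE Col2 = 'ABC'"))
# {'columns': ['Col1', 'Col2'], 'table': 'TableA', 'where': "Col2 = 'ABC'"}
```

With columns and tables extracted this cleanly, correlating queries against each other (or against observed warehouse load) becomes straightforward bookkeeping rather than natural-language processing.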
R language and R Studio
Preparing Data Sets Using Tools Designed to Perform Data Science.
The R language and RStudio seem to be very powerful when dealing with large data sets, and the syntax makes it easy to “clean” the data set. However, I still prefer SQL Server and a decent query tool; maybe my opinion will change over time. The most helpful thing I’ve seen in RStudio is the ability to create new data frames and roll back to a point in time, i.e. a previous version of the data set.
Changing a column’s data type on the fly in RStudio is also immensely valuable, for example, when the data in the column are integers but the table/column definition is a string or varchar. In a SQL database, the user would have to drop the table, recreate it with the new data type, and then reload the data. Not so with R.
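The SQL-side pain described above can be demonstrated with SQLite (Python’s built-in engine), where the classic workaround for retyping a column is create-new / copy-cast / drop-old / rename:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER, value TEXT)")
con.executemany("INSERT INTO readings VALUES (?, ?)", [(1, "10"), (2, "25")])

# There is no "retype this column in place" for arbitrary changes:
# the workaround is create-new / copy-cast / drop-old / rename.
con.execute("CREATE TABLE readings_new (id INTEGER, value INTEGER)")
con.execute("INSERT INTO readings_new SELECT id, CAST(value AS INTEGER) FROM readings")
con.execute("DROP TABLE readings")
con.execute("ALTER TABLE readings_new RENAME TO readings")

print(con.execute("SELECT value FROM readings ORDER BY id").fetchall())
# [(10,), (25,)]
```

In R (or pandas), the equivalent is a one-line type conversion on the in-memory data frame, which is exactly the convenience the post is pointing at.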
First there was Spell Check, then Thesaurus, Synonyms, and contextual grammar suggestions, and now Persona “Point of View” Reviews. Between the immensely accurate and omnipresent #Grammarly and #Google’s #Gmail predictive text, I started thinking about the next step in the AI and Human partnership in crafting communications.
Google Gmail Predictive Text
Google Gmail predictive text had me thinking about AI possibilities within email, and it occurred to me: I understand what I’m trying to communicate to my email recipients, but do I really know how my message is being interpreted?
Google Gmail has this eerily accurate auto-suggestive capability: as you type out your email sentence, Gmail suggests the next word or words that you plan on typing, and the auto-suggested sentence fragments appear to the right of the cursor. It’s like reading your mind, predicting the most common word or words to come next in the composer’s email.
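Real predictive text uses large neural language models, but the core idea can be sketched with a tiny bigram model that suggests the most frequent word seen after the current one (the training corpus below is made up):

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Tiny bigram model: suggest the most frequent word observed
    after the current word in a training corpus."""
    def __init__(self, corpus):
        self.following = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.following[prev][nxt] += 1

    def suggest(self, word):
        counts = self.following.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

corpus = [
    "please find attached the report",
    "please find the latest numbers attached",
    "please review the attached report",
]
model = NextWordPredictor(corpus)
print(model.suggest("please"))  # 'find'
print(model.suggest("attached"))
```

Chaining `suggest` on its own output yields the multi-word "ghost text" fragments Gmail shows ahead of the cursor.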
In the software development world, a persona is a categorization or grouping of people that play a similar role and behave in a consistent fashion. For example, we may have a lifecycle of parking meters, where the primary goal is the collection of parking fees. In this case, personas may include the “meter attendant” and “the consumer”. These two personas have different goals, and how they behave can be categorized. There are many such roles within and outside a business context.
In many software development tools that enable people to collect and track user stories or requirements, the tools also allow you to define and correlate personas with user stories.
As in the case of email composition, once the email has been written, the composer may choose to select a category of people whose perspective they would like to “view from”. Can the email application define categories of recipients and then preview these emails from their respective viewpoints?
What will the selected persona derive from the words arranged in a particular order? What meaning will they attribute to the email?
Use Personas in the formulation of user stories/requirements; understand how Personas will react to “the system”, and changes to the system.
Finally, the use of the [email composer] solution is based on “actors” or “personas”. Which personas are available “out of the box”? Which personas will need to be derived through the email composer’s setup of these categories of people? Wizard-based Persona definitions?
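One way such a persona preview might be sketched: each persona carries a (hypothetical, user-defined) list of trigger words, and the “preview” flags any that appear in the draft. The personas and word lists below are invented for illustration:

```python
# Hypothetical personas: each maps to words that persona might read badly.
PERSONAS = {
    "new_hire": {"obviously", "asap", "escalate"},
    "executive": {"maybe", "hopefully", "i think"},
}

def preview_for_persona(email_text, persona):
    """Return the trigger words this persona might react to,
    so the composer can soften the draft before sending."""
    text = email_text.lower()
    return sorted(w for w in PERSONAS[persona] if w in text)

draft = "Obviously we need this ASAP, or I will escalate."
print(preview_for_persona(draft, "new_hire"))   # ['asap', 'escalate', 'obviously']
print(preview_for_persona(draft, "executive"))  # []
```

A real implementation would replace the keyword lists with a sentiment/tone model conditioned on each persona, but the shape of the feature, draft in, per-persona flags out, is the same.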
There are already software development tools like Azure DevOps (ADO) that empower teams to manage product backlogs and correlate “User Stories”, or “Product Backlog Items”, with Personas. These are static, completely user-defined personas, with no intelligence to correlate “user stories” with personas; users of ADO must create these links.
Now, technology can assist us in considering the intended audience: a systematic, biased perspective that uses Artificial Intelligence to inspect your email based on the selected “point of view” (a Persona) of the intended recipient. Maybe your email would otherwise be misconstrued as abrasive, provoking a response you never intended.