
Ciphers Thwart Generative AI Plagiarism

Watermarks in Plaintext

Backed by massive compute cycles, multiple ciphers can be embedded in Generative AI text output without the recipient knowing which ciphers were applied. One or more ciphers can be used, and the more text the AI generates, the more ciphers can be applied.

If a person attempts to use AI-generated text fraudulently, such as submitting a research paper to a professor, the professor could upload the submitted paper to the generating service (assume ChatGPT, given its current popularity). The service would then scan its massive library of Book Cipher keys to detect one or more ciphers that may have been applied when the text was generated. The larger the library of cipher algorithms and the longer the generated text, the more reliable the identification becomes.

Generative AI embeds multiple text watermarks that can later be identified. It’s not bulletproof: a person using the generative AI output can attempt to move words around, substitute synonyms, and even rearrange paragraphs.

The output of such a cipher scan can then be used to estimate the probability (%) that generative AI was used to produce the document.
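As a rough illustration only, and not a description of any existing ChatGPT feature, a detector might combine the number of cipher keys that matched with the fraction of the document their hits cover into a single likelihood score; the weights and parameter names below are hypothetical.

```python
def ai_likelihood(matched_keys: int, total_keys_checked: int, coverage: float) -> float:
    """Hypothetical scoring: combine how many known cipher keys matched
    and how much of the document their hits cover (0.0 - 1.0)."""
    if total_keys_checked == 0:
        return 0.0
    key_signal = matched_keys / total_keys_checked
    # The weighting below is an illustrative assumption, not a published formula.
    return round(100 * (0.4 * key_signal + 0.6 * coverage), 1)

print(ai_likelihood(matched_keys=3, total_keys_checked=50, coverage=0.72))  # 45.6
```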

The AI community at large could build a centralized hub where any document can be checked, regardless of which Generative AI bot produced it. If all Generative AI companies participated in the same hub, it would close gaps and improve the identification of Generative AI output.

Application of a Book Cipher

In the 2004 film National Treasure, a book cipher (called an “Ottendorf cipher”) is discovered on the back of the U.S. Declaration of Independence, using the “Silence Dogood” letters as the key text.

How a “Book Cipher” Works

A book cipher is a cipher in which each word or letter in the plaintext of a message is replaced by some code that locates it in another text, the key. A simple version of such a cipher would use a specific book as the key and would replace each word of the plaintext by a number that gives the position where that word occurs in that book. For example, if the chosen key is H. G. Wells’s novel The War of the Worlds, the plaintext “all plans failed, coming back tomorrow” could be encoded as “335 219 881, 5600 853 9315”, since the 335th word of the novel is “all”, the 219th is “plans”, and so on. This method obviously requires that the sender and receiver have the exact same key book.
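A minimal sketch of the word-position scheme described above, using a tiny made-up key text; real book ciphers choose among many occurrences of a word, so treat this as illustrative rather than a faithful Ottendorf implementation.

```python
def encode_with_book(plaintext: str, key_text: str) -> list:
    """Replace each plaintext word with the 1-based position of its
    first occurrence in the key text (None if the word is absent)."""
    key_words = [w.strip(".,;:!?\"'").lower() for w in key_text.split()]
    positions = []
    for word in plaintext.lower().split():
        word = word.strip(".,;:!?\"'")
        positions.append(key_words.index(word) + 1 if word in key_words else None)
    return positions

key = "we shall all meet the plans tomorrow coming back before all else failed"
print(encode_with_book("all plans failed, coming back tomorrow", key))
# [3, 6, 13, 8, 9, 7]
```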

A book cipher can also be applied at the letter level instead of the word level, so the key text does not need to contain every plaintext word.

Solution Security

Increase the Number of Book Ciphers

Increasing the number of ciphers in a single Generative AI “product” makes the Book Ciphers harder to reverse engineer or solve. The goal is to embed as many ciphers in each Generative AI “product” as is technically possible.

Increase the Complexity of Inserted Book Ciphers

Leveraging a “word search”-like approach, the path used to identify the words or letters in the Generative AI “product” need not be read/scanned like English, from left to right. Cipher components could instead be scanned from top to bottom or from right to left.
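A minimal sketch of the idea, assuming the generated text is laid out as a character grid: the same characters can be read left to right, right to left, or top to bottom, and a cipher key could be matched against any of those paths. The grid width here is an arbitrary choice.

```python
def scan_paths(text: str, width: int) -> dict:
    """Lay characters out in rows of `width` and return the reading
    order for three scan directions (word-search style)."""
    chars = [c for c in text if c.isalnum()]
    rows = [chars[i:i + width] for i in range(0, len(chars), width)]
    left_to_right = "".join(chars)
    right_to_left = "".join(reversed(chars))
    top_to_bottom = "".join(
        row[col] for col in range(width) for row in rows if col < len(row)
    )
    return {"ltr": left_to_right, "rtl": right_to_left, "ttb": top_to_bottom}

print(scan_paths("generative ai output", width=4))
```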

How to Keep “Book Ciphers” for Generative AI Publicly Secret

Privacy: Exposing AI Chat Plugin Access to User’s Conversation History.

There are many benefits to allowing third-party plugins access to a user’s Chat history. For example, an OpenAI ChatGPT plugin could periodically trawl through a user’s chat history and proactively follow up on a conversation thread that appears to still be open-ended. Or the plugin could periodically aggregate the chat history into subjects by “smart tagging” conversations and then ask the user if they want to talk about the Manchester United football game last night. Note that, in the case of OpenAI ChatGPT, it has “limited knowledge of world and events after 2021.” Also note that, at present, neither the OpenAI ChatGPT API nor the ChatGPT plugin interface exposes the user’s chat history.

3rd Party Security Permissions for the OpenAI ChatGPT API and Plugins

Just as 3rd party apps authenticate with your Google credentials and request access to your Google data, the same level of consent should be presented to the user, i.e., “Would you like to allow the XYZ ChatGPT Plugin access to your Chat History?” Many other security questions could be presented to the user BEFORE they authorize the ChatGPT plugin, such as access to personal data. For example, if the AI Chat application has access to the user’s Google Calendar and “recognizes” the user is taking a business trip next week, the Chat app can proactively remind the user to pack for warm weather, in contrast to the user’s local weather.
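As an illustration only, a plugin consent check might look like the sketch below; the scope names and the grants store are hypothetical and not part of any OpenAI API.

```python
# Hypothetical consent model for AI chat plugins (illustrative only).
GRANTS = {}  # (user_id, plugin_id) -> set of approved scopes

def request_consent(user_id: str, plugin_id: str, scopes: set) -> None:
    """Simulate the 'Would you like to allow...' prompt by recording approval."""
    GRANTS.setdefault((user_id, plugin_id), set()).update(scopes)

def can_access(user_id: str, plugin_id: str, scope: str) -> bool:
    """A plugin may touch a resource only if the user granted that scope."""
    return scope in GRANTS.get((user_id, plugin_id), set())

request_consent("alice", "xyz-plugin", {"chat_history:read", "calendar:read"})
print(can_access("alice", "xyz-plugin", "chat_history:read"))   # True
print(can_access("alice", "xyz-plugin", "chat_history:write"))  # False
```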

Grassroots, Industry Standards Body: Defining All Aspects of AI Chat Implementations

We don’t need another big tech mogul marching up to Washington to try and scare a committee of lawmakers into defining and enforcing legal standardization, whatever that might mean for some and not for others. One item that was suggested is capping the size of AI models, with oversight for exceptions. That could cripple the evolution of AI Chat.

Just as we’ve had an industry standards body define OAuth for implementation, another cross-industry standards body can be formed to help define all aspects of an AI Chat implementation, technology agnostic, to help set aside proprietary interests.

Among industry standards for artificial intelligence and Chat, permissions for the chat app and for 3rd party plugins should be high on the list of items to standardize.

Extensions to AI Chat – Tools in Their Hands

Far more important than the size of the AI Chat model may be the tools or integrations attached to the AI Chat, which should be regulated/reviewed for implementation. The knowledge base of the Chat model may be far less impactful than what you can do with that knowledge. Just as many software products have an ecosystem of plugins that integrate into the main product, such as the JIRA or Azure DevOps marketplaces, some AI Chat plugins may need to be restricted despite being relatively simple to implement. Today, extending many AI Chat applications requires manual coding to integrate APIs/tools; assigned API keys can serve the same purpose of limiting the distribution of some AI Chat tools.

AI Chat “plugins/extensions” can vary from access to repositories to tools like SalesForce, DropBox, and many, many more. That’s on the private sector side. On the government side, AI Chat plugins can range widely, some requiring classified access, but all stem from a marketplace of extensibility for AI Chatbots. That’s the real power of these chatbots. It’s not necessarily the knowledge used for cheating on a university term paper; educators are already adapting to OpenAI ChatGPT. A recent article in the MIT Technology Review explains how some teachers think generative AI could actually make learning better.

Grassroots, industry standards bodies should be driving the technology standards, not lawmakers, at least until these bodies have explored all facets of AI Chat. Standards may also spawn from other areas of AI, such as image/object recognition, and not every item surfaced during the discovery phase needs to be restrictive. Some standards may positively grow the capabilities of AI solutions.

Chat Reactive versus Proactive Dialogs

We are still predominantly in a phase of reactive chat, answering our questions about nearly anything. Proactive dialogs will help us by interjecting at the right moments and assisting us in our time of need, whether we recognize it or not. I believe this is the scary bit for many folks engaging with this technology. Mix proactive dialog capabilities with Chat Plugins/Extensions offering N capabilities/tools, and you have a recipe for challenges that can slip beyond our control.

Platform Independent AI Model for Images: AI Builder, Easily Utilized by 3rd Party Apps

With all the discourse on OpenAI’s ChatGPT and natural language processing (NLP), I’d like to steer the conversation toward images/video and object recognition. This is another area of artificial intelligence primed for growth, with many use cases. Arguably, it’s not as shocking as bending our society at its core by creating college papers from limited input, but object recognition can seem “magical.” AI object recognition may turn art into science, as easily as AI reading your palm to tell your future. AI object recognition will bring consumers more data points from which Augmented Reality (AR) can overlay digital images onto an analog world of tangible objects.

Microsoft’s AI Builder – Platform Independent

Microsoft’s Power Automate AI [model] Builder has the functionality to get us started on the journey of utilizing images, tagging them with objects we recognize, and then training the AI model to recognize those objects in our “production” images. Microsoft provides tools to build AI [image] models (a library of images with human-tagged objects) quickly and easily. How you leverage these AI models is the foundation of “future” applications. Some applications are already here, but not in mass production. The necessary ingredient: taking away the proprietary building of AI models, such as in social media applications.

In many social media applications, users can tag faces in their images for various reasons, mostly to control who they share their content/images with. In most cases, images can also be tagged with a specific location. Each AI image/object model is proprietary and not shared between social media applications. If there were a standards body, an AI model could be created and maintained outside of the social media applications: a portable AI object recognition model usable by a wide array of applications that support it, such as social media apps. Later on, we’ll discuss Microsoft’s AI Model builder, externalized from any one application, and because it’s Microsoft, it’s intuitive. 🙂

An industry standards body could collaborate and define what AI models look like, their features, and, most importantly, their portability formats. Then industry players, such as social media apps, can elect to adopt the features that are and are not supported by their applications.

Use Cases for Detecting Objects in Images

Why doesn’t everyone have an AI model containing tagged objects within images and videos of the user’s design? Why indeed.

1 – Brands / Product Placement from Content Creators

Just about everyone today is a content creator, producing images and videos for their own personal and business social media feeds: Twitter, Instagram, Snap, Meta, YouTube, and TikTok, to name a few. AI models should be portable enough to integrate with social media applications, where tags could be used to identify branded apparel, jewelry, appliances, etc. Tags could also contain metadata, allowing content consumers to follow tagged objects to a specified URL, driving clicks and the promotion of products and services.

2 – Object Recognition for Face Detection

Has it all been done? Facebook/Meta, OneDrive, iCloud, and other services have already tried or are implementing some form of object detection in the photos you post. Each of these existing services implements object detection at some level:

  • Identifying the faces in your photos, though they need you to tag those faces so some “metadata” can be associated with these photos
  • Dynamically grouping/tagging all “Portrait” pictures of a specific individual, or events from a specific day and location, like a family vacation
  • Some image formats (JPEG, PNG, GIF, etc.) allow you to add metadata to the files yourself, e.g., so you can search for pictures at the OS level (see the short Pillow sketch below)
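For example, a few lines of Python with the Pillow library can write searchable text metadata into a PNG; the file names and keywords below are placeholders.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach searchable keywords to a PNG so desktop search can find it later.
metadata = PngInfo()
metadata.add_text("Keywords", "family, beach, 2023")
metadata.add_text("Description", "Family vacation, day 2")

img = Image.open("photo.png")              # placeholder path
img.save("photo_tagged.png", pnginfo=metadata)
```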
3 – Operational Assistance through object recognition using AR
  • Constructing “complex” components in an assembly line where Augmented Reality (AR) can overlay the next step in assembly with the existing object to help transition the object to the next step in assembly.
  • Assistance putting together IKEA furniture, like the assembly line use case, but for home use.
  • Gaming, everything from Mario Kart Live to Light Saber duels against the infamous Darth Vader.
4 – Palm Reading and other Visual Analytics
  • Predictive weather patterns
5 – Visual Search through Search Engines and Proprietary Applications with Specific Knowledge Base Alignment
  • CoinSnap iPhone App scans both sides of the coin and then goes on to identify the coin, building a user’s collection.
  • Microsoft Bing’s Visual Search and Integration with MSFT Edge
  • Medical Applications, Leveraging AI, e.g., Image Models – Radiology
Radiology – Reading the Tea Leaves

Radiology could build a model of possible issues throughout the body. Tagging images with specific types of fractures can empower the auto-detection of those issues with the use of AI. If it were a non-proprietary model, radiologists worldwide could contribute to that AI model. The prospect of displacing radiology jobs may inhibit the open, non-proprietary nature of this use case, and the AI model may need to be built independently of open input from all radiologists.

Microsoft’s AI Builder – Detect Objects in Images

Microsoft’s AI model builder can help the user build models in minutes. “Object Detection > Custom Model > Detect custom objects in images” is the template you want to use to build a model that detects objects (e.g., people, cars, anything) rather quickly, and it enables users to keep adding images (i.e., retraining the model) so the model improves over time.

Many other AI Model types exist, such as Text Recognition within images. I suggest exploring the Azure AI Models list to fit your needs.

Current, Available Data Sources for Image Input

  • Current Device
  • SharePoint
  • Azure BLOB

Wish List for Data Sources w/Trigger Notifications

When a new image is uploaded into one of these data sources, a “trigger” could be activated to process the image with the AI model and apply tags to it (a minimal sketch follows the list below).

  • ADT – video cam
  • DropBox
  • Google Drive
  • Instagram
  • Kodak (yeah, still around)
  • Meta/Facebook
  • OneDrive
  • Ring – video cam
  • Shutterfly
  • Twitter
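Here is a minimal sketch of what such a trigger handler could do once a data source announces a new image; detect_objects is a stand-in for a call to the trained AI Builder model, and the tag store is just a dictionary.

```python
# Illustrative trigger handler: a data source (DropBox, OneDrive, Ring, ...)
# would invoke this when a new image lands.
TAG_STORE = {}  # image path -> list of detected tags

def detect_objects(image_path: str) -> list:
    """Placeholder for the real object-detection model call."""
    return ["person", "dog"]

def on_image_uploaded(image_path: str) -> None:
    """Run detection on the new image and record its tags."""
    tags = detect_objects(image_path)
    TAG_STORE[image_path] = tags
    print(f"Tagged {image_path}: {', '.join(tags)}")

on_image_uploaded("uploads/backyard_2023-07-04.jpg")
```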

Get Started: Power Automate, Premium Account

Log in to Power Automate with your premium account, select the “AI Builder” menu, then the “Models” menu item. At the top left of the screen, select “New AI Model.” From the list of model types, select “Custom Model > Object Detection > Detect Custom Objects in Images.”

AI Builder – Custom Model

It’s a “Premium” feature of Power Automate, so you must have the Premium license. Select “Get Started.” The first step is to “Select your model’s domain”; there are three choices, and I selected “Common Objects” to give me the broadest coverage. Then select “Next.”

AI Builder – Custom Model – Domain

Next, you need to select all of the objects you want to identify in your images. For demonstration purposes, I added my family’s first names as the objects I want to train my model to identify in images.

AI Builder – Custom Model – Objects for Model

Next, you need to “Add example images for your objects.” Microsoft’s guidance is “You need to add at least 15 images for each object you want to detect.” Current data sources include:

AI Model – Add Images

I added the minimum recommended number of images, 15 per object, for two objects: 30 images of my family from random pics over the last year.

Once uploaded, you go through each image, draw a box around each object you want to tag, and then select the object tag.

Part 2 will cover completing the model and using it in an app.

Transitioning from Agile, Iterative Project Initiatives to Production Support, DevOps

Going from dedicated project-funded efforts using Agile and Scrum methodologies, such as Sprint Planning, Backlog Refinement, Sprint Close Demos, etc., to a production support process leveraging the DevOps (Development and Operations) model requires a transition path to be successful.

People, Processes, and Technology need to shift along with this change in the Software Development Lifecycle (SDLC) mandated by management.

Commitment v. “Pulling from the Backlog” Mindset

Agile Teams Leveraging Scrum Ceremonies

At the foundation of Agile with the application of Scrum ceremonies is a commitment from the team and individuals on the team to implement User Stories within an agreed cadence, a Sprint (e.g., two weeks). The product owner and the implementation team articulate what is required to implement the story, produce a collective, relative effort estimate in the form of Story Points, and agree to complete the set of user stories within a given sprint cadence. For each user story, the “Definition of Done” is clearly articulated in the form of “Acceptance Criteria”, and these criteria are used as a guidepost for software development and quality assurance.

As an Agile, Scrum team, you may view your product backlog differently than you would in a DevOps, Development and Operations model. Scrum teams focus on the day-to-day work of implementing user stories and Bugs to fulfill their commitments for the current Sprint. Scrum teams are typically focused on implementing work items or removing blockers and may discuss these activities each day during a Daily Scrum or “Daily Standup.”

DevOps – Pull from the Backlog

Unlike Scrum teams, who set up sessions to measure progress at a particular cadence, e.g., two-week sprints with Sprint Planning and Sprint Close sessions, DevOps team members pull from the backlog as their bandwidth becomes available. The [Business] Product Owner and the DevOps team may have regular or ad hoc Backlog Grooming / Refinement sessions to ensure the user stories are ready for implementation and prioritized appropriately.

The Product Owner and DevOps team periodically perform Backlog Refinement sessions to make sure the prioritized User Stories have all of the necessary elements to implement the user story. During these Product Backlog Refinement sessions, team members perform a relative effort estimation of each user story: how long will it take to implement? Each team member may indicate how much effort they feel is needed to implement the product backlog item (a.k.a. User Story). See articles on poker planning, a collective, relative effort estimation process/tool that standardizes how to perform these estimations.

When one of the DevOps team members has the bandwidth to take on one of the stories, they pull it off the backlog and move it to the Board for implementation. [Kanban] Boards have an agreed workflow to allow DevOps team members to move items through the agreed software development lifecycle (SDLC).

Production Critical Alerts Take Precedence

In this process, there is no commitment or agreement on when the team member will finish their work on the user story, i.e., no “complete by Sprint Close” cadence. Story Points, or Product Backlog Item size estimation, give the individual and the team an indication of how long the Product Backlog Item (PBI) might take to implement. Unfortunately, the DevOps (Development / Operations) team member’s responsibility stretches beyond “new” work from the Product Backlog. Operations duties, such as reacting to critical application monitoring alerts from the production environment, may take higher precedence.

Where Am I?!?

DevOps team members may have frequent disruptions in their work from production issues and have their heads spinning, switching back and forth between implementing PBIs and handling ad hoc issues from production applications. The Kanban board is one way to get everyone back on the same page with the changes in progress. At a glance, we can visualize the progress of implementing user stories, bugs, and associated tasks on the Kanban board.

Kanban board

Anatomy of a Great Kanban Board

Moving from a Scrum team to a DevOps team, you may, as an individual, look at the Kanban board from time to time, such as when you have bandwidth available to work on Bugs or Product Owner prioritized User Stories. The following applies whether your project team transitioned to a DevOps model or a separate DevOps team took over.

Columns Match your DevOps, SDLC Workflow

Regardless of who is doing the work, it is essential to map out how the work will be done moving forward, i.e., the software development lifecycle (SDLC) under DevOps constraints. The DevOps team will establish the states for each work item as they apply to the DevOps team. For example, there may not be QA team members, but there would still be a testing process to verify the implementation of a Bug fix or User Story.

On a Scrum team, for example, going from “Dev Complete” to “Testing Complete” may require a “Release Management” phase, i.e., promoting code from DEV to TEST environments, and there may have been a cursory “smoke test” phase before reaching “QA Approved.” The alternate DevOps SDLC process may not require a smoke test anymore due to the team’s composition. Long story short, it’s essential to get your process agreed upon and implemented on the DevOps team Kanban board. Each column has a state, and the idea is to move Product Backlog Items (PBIs) from left to right, terminating at the “Closed” status.

Identifying and Removing Blockers

It’s all about keeping the momentum forward. If we cannot work on a Bug or User Story because we are Blocked for any reason, that is time wasted without progress. As a team, we should always be on the lookout for Blocking Issues that prevent our teammates or us from moving forward. Once identified, we aggressively look for ways to unblock ourselves or our teammates. The Kanban Board typically has a “Blocked” status column, so it’s very visible to the team once the PBI is indicated to be Blocked. Of course, the “Blocked” identification and remediation process is not limited to DevOps or Scrum teams.

The HOV Lane for Critical Production Issues

In some cases, changes to production code or configuration need to be dealt with by the DevOps team. These production issues requiring “priority treatment”, e.g., Severity = Critical, may go in a “swimlane” on the Kanban board, which clearly signals that these Product Backlog Items (PBIs) are the team’s top priority (see figure above).

Definition of Done – Acceptance Criteria

As in Scrum ceremonies, the “Definition of Done” should be clearly articulated in the PBIs (i.e. user stories and Bugs). Sometimes the Definition of Done fits well in the “Acceptance Criteria” field of the PBI, i.e. these are the following things that need to appear in the code or surface on the UI to be accepted as “Closed” or “Done”.

Work in Progress (WIP) Limits

On some teams, there is a concern about “workflow blockage” at any given state in the SDLC process. For example, there could be 20 PBIs in the “In Progress” state for three DevOps team members. That could be flagged as excessive, an attempt to do too much simultaneously, and it may contribute to confusion about the current state of any given work item. Some Kanban Board tools allow you to apply WIP limits so you cannot add more work items to a given status on the board. It could also be done using a standard paper Kanban board.
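A minimal sketch of how a WIP limit behaves, whether enforced by a Kanban tool or by team discipline; the column names and limits below are examples, not a prescription.

```python
# Illustrative WIP-limit check for Kanban board columns.
WIP_LIMITS = {"In Progress": 6, "Testing": 4}     # example limits
BOARD = {"In Progress": [], "Testing": [], "Closed": []}

def move_to(column: str, work_item: str) -> bool:
    """Refuse the move if it would exceed the column's WIP limit."""
    limit = WIP_LIMITS.get(column)
    if limit is not None and len(BOARD[column]) >= limit:
        print(f"Blocked: '{column}' is at its WIP limit of {limit}.")
        return False
    BOARD[column].append(work_item)
    return True

for pbi in [f"PBI-{n}" for n in range(1, 9)]:
    move_to("In Progress", pbi)   # the 7th and 8th moves are refused
```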

Product Documentation

If two separate teams are transitioning the work, documentation may be vital to a successful transition and ongoing product maintenance. Many agile teams are lighter on documentation and trust the product to speak for itself. In the best case, user stories were created that had the team produce/update a functional specification document and a wireframe collection. The more probable situation is that we have a pristine set of Features and associated User Stories, each clearly articulating a description and, most importantly, “Acceptance Criteria” that may be used for the development and validation of the system’s functionality. Knowledge transfer documentation can be derived from these User Stories.

Always Room to Improve – Retrospective

Although a Retrospective session is typically attributed to a Scrum ceremony, you don’t have to be engaged in Scrum activities to perform a retrospective. Depending upon the DevOps team composition, it could be a collective, grassroots suggestion, or the DevOps team manager can recommend and facilitate the session. It is better if a team peer fills the facilitator role, and some retrospective tools allow anonymous feedback.

Good luck on your journey, and if you have any questions, please reach out.

“Must Have” Power Automate Enhancements for Solution Maturity

My aspirations of commercializing externally facing, manually executed workflows using the MS Flow SDK are at an impasse. Several key changes are required to move forward, listed below.

In addition to removing my “Blockers” for leveraging the MS Flow SDK, I have a “wish list” below that contains features that would enhance the overall Power Automate solution.

“My Flows” Folder and Tag Hierarchy

I have asked for this feature since Day #1, when MS Flow was released. Organizing your Flows within a flat structure is difficult: users cannot create folders and organize their content.

Organizing Flows
  • The user should be able to create folders and put Flows in them.
  • Attach “Tags” to each of the Flows created, implement another view of “My Flows” that groups Flows by Tag, and allow a single Flow to appear within multiple Tag views.

Version Control

The user should have the ability to iterate on saved workflows, compare versions of workflows, and revert to a previous workflow version.

Security Model Enhancements – Sharing Workflows

Implement execution permissions without the ability to manage or read any other workflow information. Introduce the concept of a two-tiered security model:

  • Admin user, the current view for all Power Automate users
  • Introduce a user profile/security group that can execute only the specific workflows to which it has been explicitly granted permission.
  • Eliminate the need for a second Power Automate account used only to execute shared workflows.
  • You should only need one account with a paid Premium Connector license if the secondary account is used solely for execution.

Microsoft Power Automate SDK to Build Externalized Applications

When I first started looking at Microsoft Flow, the previous name of Microsoft Power Automate, I recognized the high value and potential uses within my organization: almost innumerable Connectors to 3rd party applications, with “sensing” / triggers for many of the participating applications. Huge potential, and it won’t “break the bank” at 15 USD per user/month, which includes all of the “Premium Connectors.”

I started automating processes for both personal and business use. I upped my social media game, for example, sending a mobile notification to myself when there was a potentially interesting tweet, or emailing myself a news article when an RSS feed contained keywords I was monitoring. My client had needs around Microsoft Azure DevOps (ADO) that it could not meet “out of the box,” so I took on those automated workflows with ease.

Venture using New Business Model with msflowsdk

Then I thought it would be great to commercialize some of these workflows. However, I came to realize several technical limitations. First, to execute one of these workflows manually, you would have to execute it from within the MS Power Automate web or mobile application. The user must be logged in with Power Automate credentials to execute “your” workflow manually. As a Power Automate user, you can “Share” Power Automate flows with other Power Automate users. Unfortunately, that would require your web app customers to have Power Automate accounts, paying as much as 15 USD per month. We would have to think in terms of a generic “Production” application user, potentially shared with all external, commercial users.

Providing Custom Interface using HTML and JavaScript

Then I realized there was one way to present the Power Automate Flows without the Power Automate Web UI or the Power Automate iPhone app: use the MS Flow SDK to build HTML and JavaScript Web applications. Unfortunately, you would still have to log in with a Power Automate user that has access to the flow, but the User Interface is highly customizable using the MS Flow SDK.

Using “Generic” Test User for MS Power Automate

  • Limit the access of the user who does have access to your Microsoft Power Automate workflows. Granularize the permissions as much as possible, e.g., execute the XYZ Power Automate Flow without permission to read/see ALL the Power Automate workflows. At the moment, that doesn’t seem possible (TBD); I need to recheck the Azure Portal and client app registration.
  • Is my approach to commercializing MS Power Automate apps even supported from a Microsoft business perspective? I don’t know yet. I read the article Types of Power Automate licenses and need to reread it.
  • We need the ability to grant “Execute” access to specific MS Power Automate workflows for users without the ability to create or read any workflows of their own; perhaps a limited Power Automate user license?

Create an Azure AD User For Each Customer

  • A technically seamless implementation would be required to add Azure AD users who have paid a commercial fee for the Web app powered by Power Automate; alternatively, I could embed advertisements into the Azure Power Automate custom Web app.

The Experiment – SMS Delay

Wouldn’t it be fun to send a text message with a delay: enter a text message and parameterize the delay by N minutes? How fast could I write the app across multiple platforms, desktop and mobile? The backend and mid-tier would probably be the longest part of developing this app; the front end just needs a responsive Web App that resizes to fit the platform. But the N tiers of the stack, how fast could I develop that? Less than an hour using Power Automate.

Power Automate Workflow

This Power Automate workflow has three steps: the “Manual Trigger,” the “Delay,” and the Twilio “Send Text Message (SMS)” action, which happens not to be a “Premium” connector.

Power Automate: SMS-Delay
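For comparison, the same three steps (trigger, delay, send SMS) written directly against Twilio’s Python library look roughly like the sketch below; the credentials and phone numbers are placeholders, and in the Power Automate version the Twilio connector handles all of this for you.

```python
import time
from twilio.rest import Client  # pip install twilio

def sms_delay(message: str, delay_minutes: int) -> None:
    """Manual trigger -> Delay -> Send Text Message (SMS), in plain Python."""
    time.sleep(delay_minutes * 60)                   # the "Delay" step
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")     # placeholder credentials
    client.messages.create(
        to="+15551234567",                           # placeholder numbers
        from_="+15557654321",
        body=message,
    )

sms_delay("Don't forget the meeting!", delay_minutes=5)
```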

Front End Code to Integrate

With a combination of HTML, JavaScript, and the MS Flow SDK, I was able to put the SMS-Delay app together rather swiftly, covering everything from Azure authentication into my app through to execution.

SMS Delay UI

Give SMS-Delay a Try

Would you like to try out this Power Automate manual workflow? Provide ANY login you would like to use for Azure AD authentication; the user must have at least a Microsoft Power Automate FREE license. Once you provide the user name to me, I will update Azure AD to grant you permissions to the app and then send you a note so you can give the app a try: SMS Delay Application (rosemansolutions.com)

To Be Continued

For the next steps, I’d like to…

  • Publish the project HTML and JavaScript code I used to create this app
  • Solve the riddle of the Power Automate authentication
  • Create this and many other applications using the MS Flow SDK

Anonymous Authentication or Limited Authentication

Limit the authentication using very granular Power Automate controls, which may or may not yet be implemented: a limited Power Automate user granted permission ONLY to execute a specific workflow.

Is it possible to execute a Power Automate workflow with anonymous credentials and not necessarily have a Power Automate user account?

Digital Download: Content and License Transfer – Business Model In Jeopardy!

GameStop reminds me of Redbox and Netflix facing business model decimation as we transitioned from DVDs and Blu-ray to streaming digital content. No more physical medium to borrow/rent, just streaming data from massive content libraries. Netflix pivoted early, survived, and now thrives on a revised business model.

GameStop’s pre-owned buy-and-sell business model is in jeopardy and has been for some time now. All of the major game consoles let users purchase via digital download, and there is no way to transfer that digital content and license purchase to anyone else. If there were a way to transfer the digital content and the associated license for a game, GameStop’s pre-owned business model might thrive again.

Securely Transfer Digital Content and License

There are several possibilities for implementing this transfer. One option could leverage a large-capacity SD card: software on the console pushes the digitally downloaded game onto the SD card along with the correlated license. The reverse should also work: pop in an SD card with a loaded game and license, and that content could be transferred to any console from the same manufacturer.

Is this a job for Blockchain?

Leveraging several design features of Blockchain should put software game publishers at ease. A Blockchain architecture would guarantee ownership, the uniqueness of a digital license, and the associated digital game ownership. The content could be stored in the cloud, similar to NFT art and video content; this should NOT be confused with purchasing NFTs with ETH or other cryptocurrency. In this scenario, we would replace the SD card medium with a Blockchain architecture.
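A toy illustration of the idea, not tied to any particular chain or product: each license transfer is a record hashed together with the previous record, so the ownership history of a game license cannot be silently rewritten.

```python
import hashlib
import json

def record_transfer(chain: list, game_id: str, new_owner: str) -> list:
    """Append a license-transfer record linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"game_id": game_id, "owner": new_owner, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

ledger = []
record_transfer(ledger, "GAME-0042", "console-A")   # original purchase
record_transfer(ledger, "GAME-0042", "console-B")   # pre-owned sale
print(ledger[-1]["owner"], ledger[-1]["prev"][:12])
```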

So, why is this not implemented already?

The game console manufacturers don’t profit from trading and selling pre-owned games, so there is no push from them. GameStop should be leading the charge on this endeavor, offering to implement this module in all major gaming systems or outsourcing its implementation. Worst case, form an ad hoc committee to derive standards for implementing this module. The game console manufacturer market is an oligopoly, if not a monopoly. Could anti-trust legislation be applied here to Microsoft Xbox, Sony PlayStation, and Nintendo Switch?

Online, Gaming as a Service (GaaS) – Not Applicable

To state the obvious, online gaming or Gaming as a Service (GaaS) business models charging monthly or annual fees to access their game service do not apply to the one-time purchase of the game where the customer owns “the game.”

Azure DevOps “Out of the Box” – Getting Started with Customizations

New to Azure DevOps? Here are a few customization recommendations you can make with minimal experience and deliver maximum value. User Stories are an essential part of delivering using agile methodologies, and Azure DevOps provides a basic template for creating a User Story, such as title, description, and acceptance criteria. However, there are a few additional fields the author of user stories can capture to maximize their agile journey such as MoSCoW priority, Precedence, and Size Estimate to name a few.

In addition, there is a Marketplace (i.e., a library) of Azure DevOps Extensions that can enhance your users’ DevOps experience. This post covers the recommended extensions to apply to “Out of the Box” implementations of Azure DevOps.

Azure DevOps “Process” Updates: New Fields

Adding fields to a User Story is very simple, as long as you have access to do so. Upon opening your Azure DevOps (ADO) project, select “Project Settings”, and the “Project details” page should appear. Select the “Process” defined for that project, e.g., “Scrum”. Depending upon which Process type is selected, “Scrum” or “Agile”, you will see “Product Backlog Item” or “User Story”; both may be used interchangeably. Note that only “inherited” processes can be modified, and only by the “Project Collection Administrators” group.

Process Change: Work Item Types

A list of Work Item Types appears. Select “User Story” or “Product Backlog Item”, and the layout of the work item will be displayed. Now you can add fields by selecting the “New Field” button.

User Story – MoSCoW for MVP

For a Minimum Viable Product (MVP), where is the line drawn to get the product “out the door”? Here a methodology called MoSCoW helps; the capitalization is important, and it stands for:

  • “Must Have” – we aren’t going to production without it
  • “Should Have” – borderline must-have, but could fall off the MVP list if there is pressure to reduce scope to meet timelines, for example.
  • “Could Have” – a story identified but not prioritized in the currently targeted MVP.
  • “Won’t Have” – identified and then forgotten. It will never reach prod.

User Story – Precedence (Prioritization)

A Precedence field is reminiscent of the original BASIC programming language, which used line numbers 10, 20, 30, etc., for execution sequence. As in BASIC, assign precedence by 10s so there is room later on to fit in additional work items.

Priority within the Sprint for a given team member

How should someone on the implementation team prioritize their work? This is especially important if the team runs out of time in a sprint and must have delivered the highest business or technology value first.

Priority within a Sprint for all team members

Collectively, with input from the product owner or team tech lead, these are the most important work items to deliver within a sprint.

User Story: Size Estimate (paired with Story Points)

Relative, standardized effort estimation is essential so that everyone on the implementation team is “on the same page” when sizing user stories. Although “Story Points” is available “Out of the Box” for User Stories, a “Size Estimate” field is not. A relative effort estimation I’ve used before is tee-shirt sizes (X-Small, Small, Medium, Large, X-Large), which can be correlated to Story Points to attempt to quantify the effort in days.

User Story: Lead Developer

A custom “Lead Developer” field is valuable for quickly identifying who performed the work. The current “Assigned To” person may not be the developer who implemented the User Story; most likely, it’s a QA tester or the Product Owner accepting the story.

This could be helpful if you want to track each developer’s progress either by the SUM of Story Points or the COUNT of Stories.
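If you want that report outside the UI, a WIQL query against the Azure DevOps REST API can pull the stories per lead developer; the organization, project, personal access token, and the custom field reference name (Custom.LeadDeveloper) below are placeholders you would replace with your own, and the api-version may differ in your tenant.

```python
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "xxxxxxxx"   # placeholders
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title] FROM WorkItems "
        "WHERE [System.WorkItemType] = 'User Story' "
        "AND [Custom.LeadDeveloper] = 'Jane Doe'"        # hypothetical field name
    )
}
resp = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0",
    json=wiql,
    auth=("", PAT),   # personal access token as the basic-auth password
)
print(len(resp.json().get("workItems", [])), "stories found")
```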

Risks to Complement Issues

If you’re tracking “Issues,” an “Out of the Box” Azure DevOps work item, then why not add a custom object in the “Process” section called “Risk”, along with any fields you would like to track on that custom Risk object?

Azure DevOps Extensions

Analytics

Created by Microsoft, this extension may or may not already be rolled into the core Azure DevOps product. It’s ideal if you want to externalize in-depth reporting using Microsoft Power BI.

Open in Excel

Created by Microsoft DevLabs, this extension may or may not already be rolled into the core Azure DevOps product.

Azure DevOps Office® Integration 2019

The best tool for importing and exporting work items from Azure DevOps to and from MS Excel. It can be downloaded here.

Delivery Plans

Created by Microsoft, this extension may or may not already be rolled into the core Azure DevOps product. It’s the closest thing I’ve seen (for free) to a graphic depiction of delivery timeframes in a Gantt-like chart. However, you can’t print or export it, which is a massive inhibitor to sharing your timelines with stakeholders outside the ADO universe.

Estimate

Created by Microsoft DevLabs, this extension may or may not already be rolled into the core Azure DevOps product. It’s Planning Poker in Azure Boards. I enjoy Planning Poker, but this integration may be more convenient because it can save the Story Point values directly to the User Stories. Also, note some corporate environments BLOCK “Planning Poker” on the firewall due to the words in the URL.

Feature timeline and Epic Roadmap

This Azure DevOps extension by Microsoft DevLabs is a close 2nd to the “Delivery Plans” visualization of deliverables. Again, no export or print capabilities.

Retrospectives

This extension is a “Must Have” for all teams leveraging the Scrum Retrospectives session. This extension, built by Microsoft DevLabs, is highly configurable and is ideal for remote teams unable to perform this activity in person.