Sunday, November 26, 2017

Understanding three use case types in process modeling - not just system or business

Use cases give us a detailed understanding of a process we want to optimize or build into software. When use cases go bad, and they do, they take up space and no one uses them. As I teach them more and more, I've found that the main reason people don't use use cases is a lack of detail. So I've built more and more useful detail into the templates that I use for my classes.

Details, details

Let's take just one small entry in the metadata of the use case template: the type of the use case. The type should tell you about the context of the use case. If your analysts understand the context, there's no need to include this in the documentation. But analysts are usually given both the context of a software system to be built and the context of the project driving that software to delivery, and requirements come from all angles.

I was told there were two types of use cases: system and business. (I think there's a different set, and I'll get to that.) Not enough examples of detailed business use cases, with sequential tasks listed in the course of events, are being produced. Maybe it's because we don't understand how to do it.

One solution to making your job more tolerable and eliminating those confusing and bothersome requirements is to ignore the complex non-system use cases. Another simplifying solution is to just grab the large-scope names of the business use cases and not worry about the details. Then there's the solution that says: if it's important enough to know the details of the software workflow that automates a business process, why isn't it important to know the details of the business workflow too? Let's do that. The devil is in the details.

Two traditional types

The first and traditionally major type of use case is the system type, where we design interactions around software. The trigger to the process is someone or something choosing an option from the main menu, or a system hitting our API. Sometimes that main menu sits behind a log-in step that protects the value of the use case: it keeps unwanted people out and gives those we grant access the right to initiate the use case.

Then there's the business type, which, for lack of detailed examples, no one writes. I think the best work on enterprise analysis, showing how UML can be used to draw diagrams that the business doesn't want to use, is Eriksson and Penker's Business Modeling with UML: Business Patterns at Work (2000). These Swedes have a beautiful system that technical people can admire for its completeness, details, and symbols that look just like software diagrams. But it's the business people who need to be drawing, updating, and using these diagrams, and they sure don't want to spend time with UML to do that.

One of my pet peeves is the UML way to draw a business use case. The use case symbol is the ellipse, also known as an oval for the non-technical crew out there. You put the name of the use case in there, leaving little space for anything else. Then you draw a diagonal line inside the oval, making it look like you had a masking problem with another symbol. And if you followed UML and put the extension points in there as well, you'd have an unreadable ellipse. Let's just use a <<business>> stereotype so the category is clear when there's no system boundary box in sight.

Even Eriksson and Penker don't get down to the level of detail that matters when starting on a business process, though they do a bang-up job on high-level structures. For a system use case to be useful to the developer, the level of detail has to be fine-grained, down to a task level that needs no other explanation.

Maybe we don't think like Swedes here in America and don't take the time at a high-level to plan our goals. We think the market is moving much too fast for that and like to do shorter-term planning. Do we see the Swedes competing well in any American market other than build-it-yourself furniture? Like how many people do you know that own a Volvo?

A pure start

Moving on to how I see the process types being divided up, they start with a set of tasks performed by a human. That is what I would call a pure business use case. This business use case 
  • starts when a trigger sets it off, 
  • moves through a set of sequential tasks,
  • ends with significant value for the business and 
  • probably has documents created to support it. 
There's value in promoting merchandise, there's value in making a sale, there's value in fulfilling a sale, and there's value in getting payment for a sale. There's even value in accepting a return for a purchased item.

There's no talk of automation in the pure business use case and if someone wants to use a tool, then that is part of the design for fulfilling the process tasks. That's what increases efficiency. The task sequence described in the pure business use case is what increases effectiveness. Both can increase value for the business.
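As a sketch, those four properties of a pure business use case can be captured as plain data. This is a hypothetical JavaScript illustration; the field names and the "Make a sale" example are my own invention, not any standard notation:

```javascript
// A minimal sketch of the pure business use case structure described above.
// Everything here (field names, the "Make a sale" example) is illustrative.
const makeASale = {
  type: "business",                       // pure business: no automation implied
  trigger: "Customer brings items to the counter",
  tasks: [
    "Greet the customer",
    "Total the items",
    "State the amount due",
    "Accept payment",
    "Hand over receipt and goods"
  ],
  value: "Payment received for merchandise",
  documents: ["Receipt", "Daily sales log"]
};

// A quick sanity check that a use case has the required parts:
// a trigger, at least one sequential task, and an ending value.
function isWellFormed(useCase) {
  return Boolean(useCase.trigger) &&
         Array.isArray(useCase.tasks) && useCase.tasks.length > 0 &&
         Boolean(useCase.value);
}

console.log(isWellFormed(makeASale)); // true
```

The point of the data shape is that tools (like the Gulp-style build scripts discussed later in this blog) could lint a whole folder of use cases for missing triggers or values.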

System use cases made easy

I encourage my students to write their system use cases as pure business use cases that will be automated by the programmers. That clarifies what types of assumptions they have made about what they will be building. What if we eliminated the design constraints and changed all those "system displays a main menu" types of tasks into a pure business "system prompts for what to do"? Then the system use case turns into a pure business use case and the system role becomes a business role of some sort. I think it's better to think about how a person might handle the system functionality when writing use cases.

But this next type of use case pops up from every single student I've ever taught, and it's one that lets design elements creep into the use case tasks. It's expected in many industries because of what was already accepted before the project began, or in other words, elicited as a constraint. So I call this the system with constraints type.

Students always want to "submit" the data that is collected. They also want to use buttons, keyboards, mice, screens, menus and every other standard GUI option they think is normal. But in doing so, the tasks are now explicitly describing a design that must be used. When a web site is being designed, the design constraint of using a set of web languages and a web server has already been decided. Why should we follow that architecture if the requirements don't really point us in that direction?

So, rather than fight the common usage of this type of use case in business, it's easier to put a field at the top of the form to capture the design constraints that were decided on before we captured the requirements. The requirements then have the freedom to talk about all the clicks and hovers they want. But let's call it a system process with constraints.

Distributed tasks

The last type of use case is the distributed use case, a mix of business and system tasks, or of tasks across different systems, and one that I've seen no documentation on. There are no UML-approved designs for it and no discussion of it that I know of. But as soon as I started to write business use cases for a real scenario, it stuck its oddly shaped head up and stopped me from using the other two types. I could have changed the business process to avoid it, but the sequence of activity is so common that it has to be dealt with by changing the way we do the use case.

Another use for the distributed use case is when you have truly distributed processes that are seen as a complete goal and are within the scope of development. An example might be when microservices or API calls each provide less than a goal's worth of value but are integrated with a web page to deliver the end result. I do have a problem with creating a use case just to validate an address on the server, because that is less than the business goal a use case should end with.

The distributed use case does a step or two in one system, existing or to be developed, and then does a business activity or a task in a different system, like talking to a person to acquire information. Then it cycles. The process is distributed over two activity flows. Think of ordering at a fast-food restaurant. There the actor initiates the food order and operates an automated system on the customer's behalf, acting as an agent or proxy, because the system is too complex for the customer to use directly. The process goes like this: the actor-agent
  1. greets the customer (business task)
  2. opens a new order (system task)
  3. asks the customer for their item to order (business task)
  4. selects the item, enters the quantity, checks for additional questions or restrictions (system task)
  5. gets needed data from the customer (business task)
  6. enters extra information (system task)
  7. informs the customer of information (business task)
  8. confirms end of order with customer (business task)
  9. sends order to be processed (system task)
  10. tells customer the total of the order and how to pick up their order, and says goodbye (business task)
An optional multiple-item order is clumsily handled by trying to show a loop; it's better handled by adding an extension point at #8 that sends the flow back to #3 for the extension "Add another item to order."
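The flow above can be sketched as data, with each task tagged by which flow it belongs to. This is a hypothetical JavaScript illustration of my own, not a formal notation; the loop mechanics mimic the extension point at step 8:

```javascript
// The distributed use case as data: each task is tagged "business" or "system",
// and the extension at step 8 loops back to step 3 when another item is ordered.
const steps = [
  { n: 1,  type: "business", task: "Greet the customer" },
  { n: 2,  type: "system",   task: "Open a new order" },
  { n: 3,  type: "business", task: "Ask the customer for their item" },
  { n: 4,  type: "system",   task: "Select item, enter quantity, check restrictions" },
  { n: 5,  type: "business", task: "Get needed data from the customer" },
  { n: 6,  type: "system",   task: "Enter extra information" },
  { n: 7,  type: "business", task: "Inform the customer" },
  { n: 8,  type: "business", task: "Confirm end of order" },
  { n: 9,  type: "system",   task: "Send order to be processed" },
  { n: 10, type: "business", task: "Give total and pickup info, say goodbye" }
];

// Walk the flow; `moreItems` holds the customer's answer each time step 8 runs.
function runOrder(moreItems) {
  const log = [];
  let i = 0;
  let pass = 0;
  while (i < steps.length) {
    const step = steps[i];
    log.push(`${step.type}:${step.n}`);
    if (step.n === 8 && moreItems[pass]) {
      pass++;
      i = steps.findIndex(s => s.n === 3); // extension: add another item
    } else {
      i++;
    }
  }
  return log;
}

const trace = runOrder([true, false]); // customer orders one extra item
```

Running it with `[true, false]` produces one pass through steps 3 to 8 twice before the order is sent, which is exactly the loop the extension point describes.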

Now the problem is how to diagram that. I propose using two ellipses with <<system>> and <<business>> stereotypes and showing the dependency of the business use case on the system use case with an <<include>>. The business use case will acknowledge its dependency with a use case name that mentions the system usage. Here's how it would look:

The diagram also shows that it is possible to do an entirely business use case to take an order and then later, do an entirely system use case to create the order.

Course of events layout

You might think this distribution would cause some confusion in the course of events, and there are some issues. But the first thing to do is to label the actor correctly, the one whose only job is to initiate the use case, and then add the other supporting roles, aka "supporting actors", such as the customer and the system.

Maybe a two-column layout would be better, to separate the business use case's steps from the steps of the system use case. That way you wouldn't have to duplicate the system use case as a distinct use case when it copies steps from the business use case. Or just make sure you list the included use cases at the top in the metadata and manually keep them in sync.

But for now, this reorganization of use case types massively helps my ability to model real-life business use cases, and I hope it helps those who would like to document their business processes. Understanding that the use case starts with pure business functionality clarifies what a basic use case is about. Then, as we automate some portions or all of it, the type of the use case changes and clarifies how to model it.

Thursday, May 25, 2017

A static new web site - Zurb's Foundation with Panini

I've been meaning to update my personal web site over the last ten years but just never got around to it. The web evolved. My site has been through basic HTML, custom PHP, custom JSP with Java filters, and one attempt at using a CSS framework called Semantic UI. The defining moment of need for change was when my web host was hit by a DDoS attack lasting over a week.


It wasn't something I could spend a lot of time maintaining, so spinning up Spring, WordPress, or another unreasonably complex framework I didn't need wasn't the answer. I had several basic requirements:

  1. Very low and easy maintenance
  2. High data use for link pages
  3. Cutting edge technology I can teach
    • CSS transpiler
    • JavaScript transpiler if possible
    • CSS/JS framework
A few requirements were worth noting as negative requirements:
  • No need for security
  • No need for tracking state
  • No need to collect data


The client-side frameworks getting all the buzz, like Google's Angular and Facebook's React, were slanted towards developing mobile sites, and that was not where I was going since it adds so much complexity to the site. I had thought more along the lines of a serverless architecture, but it looks like that is still bleeding edge. It would be simple, though. Or it will be, in a few years at least.

I love that the WebAssembly team is moving along, with a new game demo that no longer requires Chrome Canary, but it will have to face committees and meetings for probably another five to fifty years. The end product will be a compiled client format for browsers, territory that client-side frameworks are just beginning to explore.

The other framework style I decided to investigate was the static site generator. I could have static data files and static pages in template form, and generate the pages as I had updates. No server-side language. It's the perfect development workflow when you have no need for complex forms, which is typical of a marketing site or a blog.

Jekyll was the big contender, but I didn't want to deal with Ruby. Hexo seemed like a possibility, but it uses EJS templating, which seems so dated. I've liked mustache-style templates since teaching and trying all sorts of templating styles. But I was running in circles investigating the various popular frameworks at StaticGen, which ranks them and let me compare them easily. No solution yet.

So the next strategy was to set that aside and look at my CSS framework. It was going to be either Twitter Bootstrap or Zurb Foundation. I've had enough problems in class getting around Bootstrap's site and using a simple template that it wasn't my first choice. Add that Bootstrap is harder to read and has fewer JavaScript components than Foundation, and I decided on Zurb.


The thing I'd never noticed was that Zurb's developers use their own static site generator (they say flat file generator) called Panini with Foundation to build all their web sites. I like anyone who eats their own dog food. It's simple and uses Handlebars templating, SCSS, and either JSON or YAML formats for data. I installed it with Foundation's CLI, and after two days of learning Handlebars and YAML, I was hooked. The Foundation CSS and JavaScript aren't doing much to structure the site yet; I'll lean on them more in the future.

My pages are just simple combinations of cards for now. On a partial (included) page component, I have two Handlebars loops producing multiple cards with multiple links on them. Logic helpers in Handlebars let me handle optional data.
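As an illustration of that pattern, assuming hypothetical data keys (`categories`, `links`, and so on are my invention, not from my actual site), the partial's nested loops look something like this:

```handlebars
{{!-- Hypothetical card partial. Expects a data file exposing `categories`,
     each with a `title` and a list of `links` having a `name` and
     an optional `url`. --}}
{{#each categories}}
  <div class="card">
    <h4>{{title}}</h4>
    {{#each links}}
      {{#if url}}
        <a href="{{url}}">{{name}}</a>
      {{else}}
        <span>{{name}}</span>
      {{/if}}
    {{/each}}
  </div>
{{/each}}
```

The outer loop produces one card per category, the inner loop one link per entry, and the `{{#if}}` is the kind of logic that handles optional data like a missing URL.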

The responsive flow of the HTML5 multiple columns is simple and not based on any grid. I'll eventually refactor the style attributes out of there so I can use better SCSS style management, and I'll use much more of the CSS from Foundation, but for now the default styling is adequate.

The partial page that holds the card component is just a shell that declares the right data file and is itself included in a basic page layout template. The data is inserted when I build the file. Panini spins up a development server with BrowserSync to update the pages as I change the HTML, though at first not as I changed the data. (Update: Kevin Ball of Zurb made a change to the Gulp file that takes care of this. Thanks for being very responsive!)

And the YAML data is easily understood, updated, and the schema customized just like a NoSQL style database. It has minimal delimiters and uses indentation for nesting. If you are off by one space, though, you will get some odd error messages when you try to view the page.

The final Gulp build combines all the CSS into one file and all the JavaScript into another, so there are minimal downloads, and it also compresses images.

It's a slick set of tools, made as easy as possible for the flexibility you get. I might tack on some JavaScript code to log sites that don't respond, or to collect other data, especially from forms. I'd have to combine this with my local site and regenerate it again. But for now, it's fun again. Hopefully, this will last another ten years.

Monday, October 31, 2016

Lunch Seminar: Thu Nov 3 - Is Angular the future of web development? WebAssembly is coming.

If you are interested in my last blog post, Is Angular the future of web development? WebAssembly is coming, then you might be interested in my presentation of the architectural topic in more detail at my school here in Kansas City, Centriq Training. Sign up at the event site.

The talk lasts about 2 hours and a free lunch is provided. I think strategic technical planners will get lots of value out of the overview.

Friday, September 23, 2016

Is Angular the future of web development? WebAssembly is coming.

Web development has passed through three major architecture patterns. And a fourth is on the way which will add more choices for designers and keep developers busy as they blend the types both by design and confusion.

Google's Angular framework is an example of the latest style of architecture, and with the release of Angular 2.0 it has demonstrated that the Single Page Application is getting easier to develop. But it's not just another framework. It's on the architectural path that the web is taking in general. The question I want to address is whether or not Angular will be a long-term solution for the web. In doing so, it will become clearer how the other styles are or aren't appropriate based on business needs.

The early server side web site

The 1.0 web, a term coined after the web had evolved some, was the basic setup where the developer created pages in a text editor, loaded them on the web server, and a browser requested them for viewing. That's it, at a high level. There were millions of documents available but no search engines like we have today. Directories indexed files the way librarians catalog books. Most of the cutting-edge activity occurred through the 1990s, but businesses and other people kept learning how to make static web pages well into the 2000s.

Web 1.0 - Development workflow for static web pages

As web servers evolved to handle more and more processing besides just delivering pages to viewers, form processing, state management, and templating systems increased in complexity. Eventually, the server became the point where pages were built, using a favored language to inject data stored in a database. .NET, Java, and especially PHP were the dominant languages for creating a dynamic web site, regenerating the page for every viewer and every request.

Web 2.0 - Dynamic web site with server side page generation and client side JavaScript UI

This style, which I associate with the Web 2.0 name, has been the driving enterprise architecture since the 2000s and continues to dominate page generation for most corporate web sites. The web server enlarged its scope of functionality to add features, which caused the name change from web server to application server on the Java side. The Microsoft side just adds features. And now nginx is becoming the dominant web server, overtaking Apache's httpd as the top dog for handling web requests, due to its speed and low or no cost.

The web 1.0 returns

The architecture where the web server does no page processing may seem like a historical starting point of design, but it's really a better solution for achieving the fastest possible response to a page request. Use cases have emerged in the last few years that move typical Web 2.0 server operations into the development workflow instead of running them during the page request.

For instance, if you have little or no updating to do on your web site, say once a month as many blogs do, you don't need an up-to-the-minute state of the page produced by processing it on every request. The hack that added complexity to the design to overcome that problem was caching, but it has issues, so much so that the HTML5 application cache proposal has since been shelved.

The easier solution is to create the pages in the development phase, once or as needed by the business, upload them to the server, and do no page building on the server. This is called static site generation and can be achieved with any language. Templates, JavaScript, CSS, and any kind of technology can be used, and the user gets the fastest possible delivery.

Web 1.0 - Development workflow for static site generation

Other types of non-server processing can be cobbled together with tools in the development cycle to advance the poor state of affairs with the old technologies of CSS and JavaScript. Recent advances in building better, faster, more scalable code have produced multiple languages that transpile, or convert, into another language: CoffeeScript and especially TypeScript compile to JavaScript, with tools like Babel translating modern JavaScript down to older JavaScript. LESS and Sass (with its SCSS syntax) are the leading languages that transpile into CSS, providing an easier set of style rules to update and manage.
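As a tiny example of what the transpiling buys you, this SCSS fragment (my own illustration, not from any particular framework) uses a variable and nesting that get resolved at build time:

```scss
// Hypothetical SCSS source: the variable and nesting exist only at build time.
$brand: #1779ba;

nav {
  background: $brand;
  a {
    color: white;
    &:hover { text-decoration: underline; }
  }
}
```

The transpiler flattens this into plain CSS any browser understands (`nav { background: #1779ba; }`, `nav a { color: white; }`, `nav a:hover { text-decoration: underline; }`), so the browser never sees the variable or the nesting; they exist only to make the rules easier to manage.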

Web 1.0 - Development workflow for CSS

Web 1.0 - Development workflow for JavaScript

My personal favorite tool for these kinds of enhancements, the one I use in my development classes, is Microsoft's Visual Studio Code, due to its great support for TypeScript and extensions that keep me from making a patchwork of Grunt script calls to JavaScript packages. But JetBrains' WebStorm might be my pick if I were a paid programmer, because of its amazing feature set.

The mobile 3.0 web app

So as the browser expanded to meet the need of giving the user a better experience, the renderer sped up, and the JavaScript engine sped up immensely, making game developers much happier. JavaScript's ability to process so much so fast enabled the UI of a web site to support a full application and add more features, just as the web server had before it. Mobile phones and tablets were trying to provide the same experience with less bandwidth, so the round trip to the server to pick up resources was not a good idea.

Web 3.0 - Web app with client side execution using scripts

Transferring resources and processing them on the browser was the ideal; then managing all the operations of an application, except for supplemental data access and security, became the goal. Even those services can be outsourced, leaving the application able to be wrapped in a hybrid application shell like Ionic so it can be deployed through a marketplace service from Apple or Google. Google recognized the need for outsourcing the basic non-client services and has been putting a lot of effort into its Firebase project.

The UI operations previously handled by a proprietary technology like a Java applet, Silverlight, or Flash now, with the demise of those non-standard add-ons, have to be managed with the only solution available in the browser: JavaScript. But our ability to build full-fledged client-based applications is increasing as we produce better code for application-building libraries.

Angular is the new kind of application framework that has finally figured out a blended solution for taking bundled resources of HTML, CSS, and JavaScript and managing the user interaction in the browser so that no server-side page building needs to occur. It provides a library of components that the app you build interacts with in the browser's web page. Now it's a web app, not a web site. The routing, controller logic, data binding, and templating are all features of the client-side framework that creates the single page application.

Execution environment maturity

An execution environment is anywhere code runs under a controlled context. For example, the database server has stored procedures, the mainframe has COBOL apps, midrange computers have RPG programs, PCs have Windows and Macintosh apps, and phones and tablets have proprietary apps. Even embedded systems are under the control of some small or large chassis that takes care of the computing, such as an ATM, a cash register, or a flight recorder. We're seeing the flourishing of familiar development environments, Linux and Windows, on single-board computers like the Arduino and Raspberry Pi.

But those systems evolved, as did other execution environments. Code structures all started with a basic script. Then the code became structured with reusable functions. Then libraries evolved to package up groups of cohesive operations. And then those scripts became faster through compiling and building the code into a machine-readable format. This is the maturity cycle that has happened elsewhere and is happening now with web languages.

The browser is an embedded execution environment. It runs inside another execution environment, but under a set of standards that lets us expect the same results from any OS and any brand of browser. The standards allow us to see the browser as a viable execution environment to code to. It's even possible to provide the browser execution environment without a traditional OS, as we've seen with the introduction of the Chromebook.

The final stage of web programming

So with the emergence of client-side programming on the web, we have begun the full acceptance of the web browser as an execution environment, which supports the mobile crowd well. We will see the Web Components ideas mature, both supported by and combined with JavaScript frameworks, for many years to come. Polymer is an excellent example of this. The Angular framework now looks like a jQuery-UI-style plug-in framework for browser apps and will continue to grow with more "plug-in" libraries like the current Angular Material project.

But the final 4.0 step will be when we convert our code into compiled form. This is now on the drawing board. It's called WebAssembly. We will keep our browser JavaScript APIs, only they will be available to us in every major language: C#, Java, VB, PHP, and of course, JavaScript. Then we will compile our code down to the WebAssembly format and load it in the browser. All major browsers have committed to supporting this standard, so they will add an engine to execute the code alongside the JavaScript engine, potentially increasing the speed of our apps by around ten times.
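To make that concrete, here's a minimal sketch of loading a WebAssembly module from JavaScript using the standard `WebAssembly` API. The byte array is a tiny hand-assembled module exporting a single `add` function; in practice the bytes would come from compiling C#, Java, or another language and be fetched over the network:

```javascript
// A minimal sketch: the bytes below are a tiny hand-assembled WebAssembly
// module exporting one function, add(a, b) -> a + b. Real modules would be
// produced by a compiler toolchain, not written by hand.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0/1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);      // compile
const instance = new WebAssembly.Instance(module); // instantiate
console.log(instance.exports.add(2, 3));           // 5
```

The compiled function is called from JavaScript like any other, which is the blended future the post describes: JavaScript APIs on the outside, compiled code on the inside.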

Web 4.0 - Web app with client side execution using compiled code

The structures of client-side development that we are designing will remain stable and useful, but the code will be written in Visual Studio, Eclipse, and new IDEs. I'm guessing the ability to start doing this kind of work is about four years out, around 2020.

Angular's future

So, does this mean that Angular will fade away? Not likely, since it's been the trailblazer for a successful framework in JavaScript. I expect to see clones in most languages for WebAssembly, like Jangular and Nangular as they might be called. But the dominance of JavaScript as a primary language, the need for TypeScript, and the requirement to know the JavaScript development toolset are all going to be less important in the near future.

I'm excited for the future of the web. It's not the web as we knew it. It's a new development environment that will influence other environments, such as television and other places where we might see an embedded browser execution environment. Maybe we'll see new types of hardware environments emerge that cater to VR, instead of just putting the browser to poor use on our web-enabled refrigerator.

Whatever it is going to be, we can be assured that the browser has blossomed fully as a development environment, and the rate of design change and the churn of related projects will slow considerably. The design choices are stable, the languages are looking stable, and it's time to mature our processes surrounding the use of the web.

Monday, August 29, 2016

Complete testability - analysis quality essentials for the confused

Requirements are usually described as having an intertwined bunch of qualities we aspire to. In reality, it ends up like a person admonishing you to lead a good life. Engineers took a lot of time listing these qualities in documents like IEEE Standard 830-1998, superseded by the more impressive ISO/IEC/IEEE 29148:2011, defining qualities like

  • measurable
  • testable
  • consistent
  • correct
  • complete
  • unambiguous
  • ranked
  • modifiable
  • traceable
  • feasible
  • independent
  • necessary
  • non-redundant
  • terse
  • understandable
Most people don't want to take the time to assess a list of qualities like those against a set of a thousand requirement statements, nor do they think it has any value. They're right, but they probably don't know why. They want their requirements to lead a good life, but there are so many options it's hard to focus. Here's some advice to help people focus.

The two most important qualities of requirements, the ones worth keeping in your head as you write documents for projects, are easy. Most of the other qualities follow when these two are honored. All the important documents for modeling requirements are covered by them. Even the documents' ability to be understandable and to communicate well is reinforced when these two qualities get attention.

The qualities we seek to promote to the top of our analysis minds are testability and completeness. Let's call the pair complete testability for short. They have the most value in peer reviews because they are easily remembered. And they handle the different levels of granularity well, addressing the needs of the single requirement task as well as the larger requirements in scenarios and entire projects.

Testability per requirement

To be testable, we must be able to write a test for a requirement. If we say we are going to arrive at work, we can't test that, because we don't have a metric to measure by. Business people have seen that kind of task irresponsibility: vague promises followed by the need for follow-up at some specific time or day. An assigned task requires a test to see if it was completed, and those metrics have to be known ahead of time.

If we say we are going to arrive at work and our only metric is time, we have to give the test a unit and a measurement. The unit is minutes, and the measurement is 8 am. Now I can send someone to my office and have them check whether, within one minute, I show up at the office at 8 am. Test done.

Tests cannot be formed from a negative requirement unless there is a constraint involved. If you said you will not be at work today, you have a negative requirement to test, but one limited by a time constraint. Still, a person would have to spend the entire day, from 8 am to 5 pm, waiting for the event to happen, just in case you showed up for one minute and invalidated the test. Without the constraint there would be no end to the test, so no final test result would ever occur.

So a test requires both a positive event and a way to measure it with a metric. The metric involves measuring something with a unit. Writing the requirement this way creates a formal approach that eliminates most bad requirement writing.
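A sketch of the arrive-at-work requirement as an executable test, with the unit, the measurement, and a tolerance made explicit. The function names and the one-minute tolerance are my own illustration, not from any standard:

```javascript
// The "arrive at work" requirement made testable: a positive event (arrival),
// a unit (minutes), and a measurement (8:00 am, within a one-minute tolerance).
function minutesSinceMidnight(hours, minutes) {
  return hours * 60 + minutes;
}

function meetsArrivalRequirement(arrivalHours, arrivalMinutes) {
  const target = minutesSinceMidnight(8, 0);          // measurement: 8:00 am
  const actual = minutesSinceMidnight(arrivalHours, arrivalMinutes);
  return Math.abs(actual - target) <= 1;              // unit: minutes, tolerance: 1
}

console.log(meetsArrivalRequirement(8, 0));  // true  — on time
console.log(meetsArrivalRequirement(8, 1));  // true  — within tolerance
console.log(meetsArrivalRequirement(8, 15)); // false — requirement not met
```

Because the unit and measurement were stated ahead of time, the test has a definite pass/fail result, which is exactly what the vague "I'll arrive at work" lacks.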

Completeness per project

The other requirement quality that works to create great requirements is really not about the individual requirement. The completeness check is about traceability and dependencies throughout the project. If you know you have to be at work today, you might want to think about all the other days you want to be at work. This treats the requirement as a specific and makes a generality out of it. You can find more requirements this way, for the other days you should be at work.

The other type of completeness check involves going from the general to the specific. In the case of going to work: when you go to work, should you stay at work all day? Can you leave for lunch? How long can you be gone before it's not seen as being at work a full day? If you drill down into just the lunch period for more detail, you can ask whether taking a longer lunch while talking about work counts as working.

Another type of completeness check might involve switching the role or actor of the scenario, or looking for alternate scenarios. Is the work day the same for all job titles? How about a different type of work activity, like being on the road? Now being at work today takes on a different meaning. Also, in the scenario of a use case, we can ask whether we captured all of the sequential tasks needed to achieve our goal.

Even looking at one specific quality for completeness can be rolled up into asking whether we have captured all the qualities that should be checked for all of our requirements. Now we are asking, as analysts, what we have forgotten. Sometimes it's an implicit requirement that no one has stated but everyone expects. Or maybe it was just forgotten. Don't let this lack of completeness analysis turn into scope creep you blame on the users.

Complete testability

I like completely understanding and testing the requirements so much that the documents I write can be used, in their entire content and structure, as test documents for successful test cases. It's also, in my mind, the reason programmers have taken to Test Driven Development: they didn't get good requirements in the first place, felt that a giant methodology was an administrative waste of time, and so made requirements testable right in their code editors.

If the developers can change how they write code, surely the analysts can change to a more structured approach that keeps emphasizing the testability and completeness of the requirements document. It does mean that an analyst can no longer just document what they hear; they will have to apply judgment, using some rules about what is complete and testable for their environment.

But the end result will be that the business or individual ends up with a better and cleaner implementation in code or business process, one that is easily tested and creates more feedback to improve the quality of the end product.

Thursday, July 28, 2016

Ranking rain and risky business events with confidence in strategy and testing

Risk isn't measured easily. It's better viewed as a personal understanding of probability applied to avoiding unpleasant situations. It can also measure positive chances but we'd rather know what the chance of rain is instead of the chance of sunshine. The point of risk management is to find out where risks are and how to mitigate them if possible. Why are we obsessed with measuring it when it's really a useless number?

I am amused by meteorologists who explain the probability of rain as the percentage of the total area that will get rain multiplied by their confidence that rain will occur. Nobody really understands what 40% means, but that's math. We like percentages for rain and risk because over repeated forecasts we have something to compare against whether it really rains for any one of them. But the weather guys spend more time on this than project managers do, so we'll go with their best guess on risk measurement.
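That forecast arithmetic can be sketched directly (a toy illustration, not how any weather service actually computes its numbers):

```python
def rain_probability(coverage_fraction: float, confidence: float) -> float:
    """Forecast probability: the fraction of the area expected to get rain
    times the forecaster's confidence that rain occurs at all."""
    return coverage_fraction * confidence

# 80% of the area, 50% confident -> the familiar "40% chance of rain"
print(rain_probability(0.8, 0.5))  # 0.4
```

Two subjective inputs multiplied together, which is exactly why the single output number is so hard to interpret.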

Probability of events

Probability, and therefore risk, starts with measuring outcomes of similar events. To measure anything in a business metric so you can compare results, you need a quantity of a standard unit. Math is funny: when you divide the number of useful past events by the total past events, you lose your units of measurement. So you don't have any probability units. There is nothing that assigns one IT project 4 RUs (risk units) and another 15 RUs, so probabilities of different kinds of events can't be compared. That's our first problem.

Risk has to be related to a concrete event. Rain forecasts assume a probability of occurrence drawn from a large number of the same type of event, and as a stand-in we use similar historical events. A 50% chance of getting heads doesn't mean anything after one coin toss because it's one event: your only two outcomes are heads and tails, and there's no 50% tails. But a history of 500 tosses showing heads at 50.3% gives us some confidence.
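A minimal sketch of that empirical estimate (the 252-head count is hypothetical, chosen only to land near the 50.3% figure):

```python
def empirical_probability(successes: int, trials: int) -> float:
    """Estimate probability as the fraction of past events that succeeded.
    Note that the units cancel: the result is a dimensionless ratio."""
    return successes / trials

# A history of 500 tosses with a hypothetical 252 heads
print(round(empirical_probability(252, 500), 3))  # 0.504
```

The single toss tells you nothing; only the accumulated history of similar events produces a number worth quoting.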

Confidence in chance

Confidence is what gets degraded when we don't have similar events. In projects we never have identical events, but, as with the thousands of influences on a storm system, the more of them that are about the same, the higher the confidence level of the forecast, which in turn raises the stated chance of rain. Confidence is easier to establish when the events are less reality-based and can be easily counted. Coin tosses dominate the probability world, though even tossing a coin has been shown to be not that easy.

Most business people don't have a backlog of 50 similar-sized projects of the same type run by the same project manager with the same budget. The more events with the same influences, the better the confidence value. So confidence is easily eroded, making our probability estimate less certain and our calculations less helpful. That's our second problem. We can make an educated guess at how likely an event is based on past events of the same type, but the necessary confidence reality check will destroy a risk result.
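The "more similar events, better confidence" point can be quantified with the standard error of an estimated proportion, a textbook statistics formula rather than anything from project management practice:

```python
from math import sqrt

def standard_error(p: float, n: int) -> float:
    """Standard error of a proportion p estimated from n similar events.
    Smaller values mean a tighter, more trustworthy estimate."""
    return sqrt(p * (1 - p) / n)

# The estimate tightens as the history of similar events grows
for n in (10, 100, 1000):
    print(n, round(standard_error(0.5, n), 3))
```

With only a handful of comparable projects, the error band around any "probability of failure" is wide enough to swallow the estimate itself.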

How priority affects risk

Priority is not risk. Priorities are best assigned to scope units, called work packages or use cases, that estimate total value. I like ITIL's breakdown of priority into impact and urgency, and I adapt it from the incident or trouble ticket to the use case. ITIL puts the value in terms of the number of people affected (impact) and the value perceived (urgency). For internal projects that might be expressed as a high, medium, or low number of employees and a high to low level of visibility to executives or high-level managers. For external projects it might be expressed as a high to low market share and a high to low price point compared to competitive products.

In fact, priority does not influence risk at all. Risk can be multiplied by a priority ranking to show how value is reduced, but the end result of one subjective number times another isn't useful except for comparing items to each other. Priority is good for ranking the order of development of use cases. It's also good for ranking test case execution effort and thoroughness after factoring in development complexity and unfamiliarity of the unit under test.
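Here is a sketch of that ITIL-style impact-times-urgency ranking adapted to use cases (the numeric weights and the use case names are illustrative assumptions, not from ITIL itself):

```python
LEVELS = {"high": 3, "medium": 2, "low": 1}

def priority(impact: str, urgency: str) -> int:
    """Rank a use case by impact (people affected) and urgency
    (value perceived). Higher numbers rank first."""
    return LEVELS[impact] * LEVELS[urgency]

use_cases = {
    "Take payment":   ("high", "high"),
    "Reset password": ("high", "medium"),
    "Export report":  ("low", "low"),
}
ranked = sorted(use_cases, key=lambda uc: priority(*use_cases[uc]), reverse=True)
print(ranked)  # ['Take payment', 'Reset password', 'Export report']
```

The output is an ordering, which is all a subjective-times-subjective number is good for: deciding what to build and test first, not measuring risk.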

Project risk

There are two easily confused types of risk in a project. There is project risk which occurs during the project. And there's organizational risk which is taken as a bet when the project is chartered and realized when the value is gained.

PMI defines project risk as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives." But since all future events are uncertain, that definition makes all project activities risks. Even if we declare risk to be the likelihood of a project failing to meet objectives, we're still in the same boat, because any event could conspire to reduce full attainment of the objectives.

Project risk bets are taken mostly in design. The bet on the scope to be delivered starts when you spend money toward a design and ends when the product or code is delivered using that design. Strategy and analysis phases can influence design, but design commits you to a path. These are all known risks, estimated up front. But there's always a component of unknown risks, like developers changing jobs or emergencies; that's what project managers have 'flex time' for.

I have seen the risk associated with failure to do adequate analysis called scope definition risk. That's the same baloney as calling the downstream effect of bad requirements gathering scope creep. This isn't project risk; it's systemic, and therefore organizational, risk. Check your process maturity and start there with a project constraint. Since you know you don't have good enough requirements management, you should expect some decay around the elicited scope. It's a given, not a risk.

Organizational risk

The estimated value, or dollars, associated with the project, which guides the project manager, is broken up into three critical success factors. The three things that make a project succeed are
  • finishing on time
  • coming in on budget
  • delivering total decided scope
The value measurement units that we can use for each of these are standardized into 
  • number of days
  • amount of dollars
  • use cases
The first two can be reduced to the common unit of dollars; days are just a measure of the cost of overhead, labor, and other consumed resources. Scope is harder. If you are using good requirements estimation units as recommended in my blogs, you should choose a scope measurement unit that delivers those units of value: use cases and user stories in the analysis world, or work packages in the project management world.

Then you can use a standard conversion of like-sized use cases to PMI's approximation for work packages of up to ten days, depending on complexity. So input from the developers, who gauge the difficulty of a use case, is critical. Scope estimation is a process similar to test case effort estimation. It might break down to
  • low - 2-4 days
  • medium - 5-7 days
  • high - 8-10 days
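A minimal sketch of turning those complexity ratings into a total scope estimate, using the midpoints of the ranges above (the midpoint choice is my own assumption):

```python
# Midpoints of the low/medium/high day ranges above
DAYS = {"low": 3, "medium": 6, "high": 9}

def scope_estimate(complexities):
    """Total estimated days for a list of use-case complexity ratings."""
    return sum(DAYS[c] for c in complexities)

# One low, two medium, one high use case
print(scope_estimate(["low", "medium", "medium", "high"]))  # 24
```

Now all three success factors sit in comparable units: days convert to dollars, and scope converts to days.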
The budget component should be constrained at the point at which the value can be assessed, before the decision for approval is made. At that point, if the time component is also constrained, there is no flexibility in the scope component: only some unknown amount of value can be delivered, creating high anxiety for the project manager, who hopes that the scope is minimal, the complexity is low, or the developers are wizards. Once the requirements have been analyzed, it may be that the solution is too big and the scope can't be delivered successfully, leading only to project cancellation and sanity preservation for the enterprise.

If the time component is flexible, there's still some angst about the outcome, but now with some ability to adjust. Using employees over time still accumulates project cost, so the amount of time necessary to deliver a solution may push expenditures over the project's value. So where's the risk here, if the value is estimated, the time is estimated, and the scope determines whether the project is green-lit or not?

The bet on known risks is taken at chartering and cashed in when in production. But unknown risk lies in the changing influences on the solution, which is a strategy responsibility. A Pokemon Go clone is only good if the market has not produced other competitors or people feel like they have found enough Bulbasaurs or Meowths.

Strategic constraints subject to influence now become sources of risk. Events expected to happen that influence the planned time or budget for the project must be listed. How do you judge the probability of occurrence of a one-time future event? You don't. You just put a number on it and rank the items by that. It's all subjective. The next person puts a different rank on it, and then you compromise. Instead of risk, let's call it a proposed ranking of possible scenarios that affect value.

It's all a guess

But because the infinite number of influences can never be known, it's not going to be that useful to collect historical information, since the past may not accurately reflect the future. And it takes quite a bit of time to collect, enter, write up, and present those reports. Maybe it's better to just go with your gut when you need to rank risky events. Or see if your arthritis pain is flaring up like it does before weather changes.

Monday, April 25, 2016

The new web with AngularJS 2 & TypeScript - a browser marriage, not a date

The world of web architecture is undergoing its third major overhaul, and there’s still more to come. This third version of web development architecture is centered on the browser as a stand-alone engine, so mobile apps and desktop apps have few differences: developers can write an app once and use it on a desktop or mobile device, or even embed it in a proprietary mobile app. This is what folks are calling a Single Page Application (SPA).

As web developers, we moved from static pages, to having our entire web site pre-processed by the server, and now, since jQuery gave us the confidence to move some of that UX processing to the browser, we’ve been slowly moving more and more to the client. But a client-side web app is a marriage to the browser, not just a date.

Angular's popularity

In the last few years, the client-side processing world has matured. You may think it’s still in its adolescent phase, but it’s showing enough promise and enthusiasm to garner attention from the big players out there. The buzz is concentrated around Google’s AngularJS web framework, which manages a client-side set of views and handles some I/O. Microsoft thought well enough of it to partner with Google on a much-needed improvement to JavaScript called TypeScript. Google dropped development of a similar language of its own and adopted TypeScript as a primary language for AngularJS 2, just released in beta as of this January.

Angular also matured through five years of work to find a way to reuse web components the way other types of software do. But the web is stuck with an amalgam of HTML, CSS, and JavaScript that requires the browser’s assistance to make it all work. And contending with the offline nature of mobile apps threw a real wrench into traditional coding: we have to have our own database storage, routing between pages, and communication with other servers, and we can’t hide behind the protective walls of the secure server.

Google has done a masterful job of keeping the framework powerful enough to satisfy developers hungry for complexity and customization, yet simple enough to be used by designers and jQuery-level developers alike. It’s got all the magic of RxJS observables (really? you’re still working on Promises?) but with patterns that help us code easily with AJAX. AngularJS 1 had a problem: too much high-level architecture, a brick wall that kept the team from adding some of the great features we now get in Angular 2.

The only other major competition to Angular has come out of Facebook’s React framework. It has less buzz and more complexity than most developers need, unless of course you’re developing large systems like Facebook. But it looks like it's still morphing into better versions, and a few years will give us a better understanding of how to use it.

Working with status quo and my class

In business, there’s a need to keep moving ahead while still using the resources that have been paid for. The frameworks of .NET and Java still rule enterprise web sites, and making Angular play well with these guys will take some thoughtful redesign. Remember that these server-side frameworks do all the rendering of a web page before the page is sent to the browser. The browser is where Angular expects to do its stuff, but it makes more sense to do secure data access back on the server and, for the most part, generate a page there. What we are looking for in a client-side application is to improve on that and find the best of both worlds.

The improvements we get from Angular include a better user experience through faster page rendering. We also make gains when we design a great reusable library of web components that designers can just drop into pages. Those can be excellent advantages if we learn to trim back the features we've been using on the server and concentrate on the asynchronous data access and microservices challenges before us.

For the last four months, I’ve been developing a class for my employer, Centriq Training, in AngularJS 2 and TypeScript that I think will meet the needs of business users, from the component developer on desktop and mobile to the designer thinking about the future of web components, possibly using Google’s Angular Material. It's been the most difficult course I've written.

It's been a challenge: minimal documentation was available at the start, other published material was poor except what came from the Angular developer community (God love you Pascal Precht and Victor Savkin), and the documentation continues to be updated by the user community. The AngularJS 2 API docs have a lot of "WHAT IT DOES - Not yet documented" in them. But hey, it's still beta. For the most part, it's midway between the massive headaches of the patchwork, framework-less packages I see in JavaScript solutions and the tried and true .NET frameworks.

I keep seeing improvements and industry support that indicate TypeScript/AngularJS 2 is going to be a leader for years to come. The editors I use are a delight. For free use and my class, it's Microsoft Visual Studio Code, which has nothing to do with the Visual Studio editors for C# and other languages. (Do they know they are following in the footsteps of the Java and JavaScript confusion?) That and JetBrains' WebStorm are the best tools available so far.

If the transition to responsive design and mobile apps didn't overwhelm you and you want to see how the next wave is going to improve things, these maturing products are definitely worth a look; they will easily shape web architecture for the next five years. Hope to see you in class!