Monday, October 31, 2016

Lunch Seminar: Thu Nov 3 - Is Angular the future of web development? WebAssembly is coming.

If you are interested in my last blog post, Is Angular the future of web development? WebAssembly is coming, then you might be interested in my presentation at my school here in Kansas City, Centriq Training, which covers the architectural topic in more detail. Sign up at the event site.

The talk lasts about 2 hours and a free lunch is provided. I think strategic technical planners will get lots of value out of the overview.

Friday, September 23, 2016

Is Angular the future of web development? WebAssembly is coming.

Web development has passed through three major architecture patterns, and a fourth is on the way. It will add more choices for designers and keep developers busy as they blend the styles, sometimes by design and sometimes by confusion.

Google's Angular framework is an example of the latest style of architecture, and with the release of Angular 2.0 it has demonstrated that the Single Page Application is getting easier to develop. But it's not just another framework. It's on the architectural path that the web is taking in general. The question I want to address is whether or not Angular will be a long-term solution for the web. Along the way, it should become clearer when the other styles are or are not appropriate based on business needs.

The early server side web site

The 1.0 web, a term coined after the web had evolved some, was the basic setup where the developer created pages in a text editor, loaded them onto the web server, and a browser requested them for viewing. That's it at a high level. Millions of documents were available, but there was no search engine like we have today. Directories indexed files the way librarians catalog books. Most of the cutting-edge activity occurred through the 1990s, but businesses and individuals were still learning how to make static web pages well into the 2000s.

Web 1.0 - Development workflow for static web pages

As web servers evolved to handle more and more processing beyond just delivering pages to viewers, form processing, state management, and templating systems increased in complexity. Eventually, the server became the point where pages were built using a favored language injecting data stored in a database. .NET, Java, and especially PHP were the dominant languages for creating a dynamic web site that rebuilt the page for every viewer and every request.

Web 2.0 - Dynamic web site with server side page generation and client side JavaScript UI

This style, which I associate with the Web 2.0 name, has been the driving enterprise architecture since the 2000s and continues to dominate page generation for most corporate web sites. The web server enlarged its scope of functionality to add features, which on the Java side prompted the name change from web server to application server. The Microsoft side just keeps adding features. And now nginx is becoming the dominant web server, overtaking Apache's httpd as the top dog for handling web requests due to its speed and low or no cost.

The web 1.0 returns

The architecture where the web server does no web page processing may seem like a historical starting point of design, but it's really a better solution for achieving the fastest possible speed for a page request. Other use cases have emerged in the last few years that move typical Web 2.0 server processing into the development workflow instead of running it during every page request.

For instance, if you have little or no updating to do on your web site, say once a month like many blogs, you don't need the up-to-the-minute state of the page that processing it out on every request provides. The hack that added complexity to the design to overcome that problem was caching, but it has enough issues that the recent HTML5 application cache proposal has been shelved.

The easier solution is to create the pages in the development phase, once or as often as the business needs, upload them to the server, and do no page building on the server at all. This is called static site generation and can be achieved with any language. Templates, JavaScript, CSS, and any other kind of technology can be used, and the user still gets the fastest possible delivery.

Web 1.0 - Development workflow for static site generation
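To make the idea concrete, here is a minimal sketch of build-time page generation, assuming a Node.js environment with TypeScript; the posts array and file names are invented for illustration.

// generate.ts - build-time page generation; the output files are what gets uploaded
import * as fs from 'fs';

const posts = [
  { slug: 'hello', title: 'Hello', body: 'First post.' },
  { slug: 'angular', title: 'Angular', body: 'Second post.' },
];

for (const post of posts) {
  const html = `<!DOCTYPE html>
<html><head><title>${post.title}</title></head>
<body><h1>${post.title}</h1><p>${post.body}</p></body></html>`;
  // write a finished page per post; the web server never builds anything on request
  fs.writeFileSync(`${post.slug}.html`, html);
}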

Other types of non-server processing workflows in the development cycle can be cobbled together with tools to advance the poor state of affairs with the old technologies of CSS and JavaScript. Recent advances toward better, faster, more scalable code have produced several languages that transpile, or convert, into JavaScript: CoffeeScript, newer ECMAScript syntax via Babel, and especially TypeScript. LESS and Sass (with its SCSS syntax) are the leading languages that transpile into CSS, providing an easier set of style rules to update and manage.

Web 1.0 - Development workflow for CSS

Web 1.0 - Development workflow for JavaScript
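As a small illustration of the TypeScript side of that workflow, here is a hypothetical source file; running it through the TypeScript compiler (tsc) during the development phase emits plain JavaScript for the browser, so no transpiling happens on the server or at request time.

// greet.ts - an illustrative file for the build step; `tsc greet.ts` emits greet.js
interface Person {
  name: string;
}

function greet(person: Person): string {
  // the type annotation is checked at build time and erased in the emitted JavaScript
  return `Hello, ${person.name}`;
}

console.log(greet({ name: 'web' }));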

My personal favorite tool for these kinds of enhancements, and the one I use in my development classes, is Microsoft's Visual Studio Code, due to its great support for TypeScript and extensions that keep me from making a patchwork of Grunt script calls to JavaScript packages. But JetBrains' WebStorm might be my pick if I were a paid programmer because of its amazing feature set.

The mobile 3.0 web app

So as the browser expanded to give the user a better experience, the renderer sped up and the JavaScript engine sped up immensely, making game developers much happier. JavaScript's ability to process so much so fast enabled the UI of a web site to support a full application and take on more features, just as the web server had. Mobile phones and tablets were trying to provide the same experience with less bandwidth, so the round trip to the server to pick up resources was not a good idea.

Web 3.0 - Web app with client side execution using scripts

Transferring the resources and processing them on the browser was the ideal; then managing all the operations of an application, except for supplemental data access and security, became the goal. Even those services are now outsourced, leaving the application able to be wrapped in a hybrid application shell like Ionic so it can be deployed through a marketplace service from Apple or Google. Google recognized the need for outsourcing the basic non-client services and has been putting a lot of effort into its Firebase project.

The UI operations previously handled by a proprietary technology like a Java applet, Silverlight, or Flash now, with the demise of those non-standard add-ons, have to be managed with the only possible solution on the browser: JavaScript. But our ability to build full-fledged client-based applications is increasing as we produce better code for application-building libraries.

Angular is that new kind of application framework that has finally figured out a blended solution for taking bundled resources of HTML, CSS, and JavaScript and managing the user interaction on the browser so that no server-side page building needs to occur. It provides a library of components that you build into a browser's web page and interact with from your app. Now it's a web app and not a web site. The routing, controller logic, data binding, and templating are all features of its client-side framework that create the single page application.
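To give a feel for that component style, here is a minimal Angular 2 component sketch in TypeScript; the selector, template, and property names are illustrative rather than taken from any real app.

import { Component } from '@angular/core';

@Component({
  selector: 'greeting-card',
  template: `
    <h2>{{ title }}</h2>
    <button (click)="clicks = clicks + 1">Clicked {{ clicks }} times</button>
  `
})
export class GreetingCardComponent {
  title = 'Rendered entirely in the browser';  // data binding, no server page build
  clicks = 0;                                  // UI state lives on the client
}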

Execution environment maturity

An execution environment is anywhere code runs under a controlled context. For example, the database server has stored procedures, the mainframe has COBOL apps, the midrange computers have RPG programs, PCs have Windows and Macintosh apps, and phones and tablets have proprietary apps. Even embedded systems are under the control of some small or large chassis that takes care of the computing, such as an ATM, a cash register, or a flight recorder. We're now seeing familiar development environments such as Linux and Windows flourish on single-board computers like the Arduino and Raspberry Pi.

But those systems evolved, as did other execution environments. Code structures all started with a basic script. Then the code became structured with reusable functions. Then libraries evolved to package up groups of cohesive operations. And then those scripts became faster by compiling and building the code into a machine-readable format. This is the maturity cycle that has happened, and is still happening, with web languages.

The browser is an embedded execution environment. It runs under another execution environment but by a set of standards that allows for the same results to be expected from any OS and any brand of browser. The standards allow us to see the browser as a viable execution environment to code to. It's even possible to provide the browser execution environment without an OS as we've seen with the introduction of the Chromebook.

The final stage of web programming

So with the emergence of client-side programming on the web, we have begun fully accepting the web browser as an execution environment, which serves the mobile crowd well. We will see the Web Components ideas mature, both supported by and combined with JavaScript frameworks, for many years to come. Polymer is an excellent example of this. The Angular framework now looks like a jQuery-UI-style plug-in framework for browser apps and will continue to grow with more "plug-in" libraries like the current Angular Material project.

But the final 4.0 step will be when we convert our JavaScript code into compiled code. This is now on the blackboard. It's called WebAssembly. We will keep our browser JavaScript APIs, only now they will be available to us in every major language: C#, Java, VB, PHP, and of course JavaScript. Then we will compile our code down to the WebAssembly format and load it in the browser. All major browsers have committed to supporting this standard, so they will add an engine to execute the code, taking over the JavaScript engine's functionality and increasing the speed of our apps by around ten times.

Web 4.0 - Web app with client side execution using compiled code
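As a rough sketch of how a browser app might consume such a module using the WebAssembly JavaScript API that browsers are shipping; the file name math.wasm and the exported add function are hypothetical.

// Fetch a compiled module and call one of its exports from TypeScript.
async function loadMathModule(): Promise<void> {
  const response = await fetch('math.wasm');                 // hypothetical compiled module
  const bytes = await response.arrayBuffer();
  const { instance } = await WebAssembly.instantiate(bytes); // compile and instantiate it
  const add = (instance.exports as any).add;                 // hypothetical exported function
  console.log(add(2, 3));                                    // runs as compiled code
}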

The structures of client-side development that we are designing now will remain stable and useful, but the code will be written in Visual Studio, Eclipse, and new IDEs. I am guessing that the ability to start doing this kind of work is another four years out, around 2020.

Angular's future

So, does this mean that Angular will fade away? Not likely since it's been the trailblazer for a successful framework in JavaScript. I expect to see clones in most languages for WebAssembly like Jangular and Nangular as they might be called. But the dominance of JavaScript as a primary language, the need for TypeScript, and the requirement to know the JavaScript set of development tools are all going to be less important in the near future.

I'm excited for the future of the web. It's not the web as we knew it. It's a new development environment that will influence other environments such as television and other places where we might see an embedded browser execution environment. Maybe we'll see new types of hardware environments emerge that cater to VR instead of just putting the browser to poor use on our web-enabled refrigerators.

Whatever it is going to be, we can be assured that the browser has blossomed fully as a development environment, and the rate of design change and churn of projects related to it will slow considerably. The design choices are stable, the languages are looking stable, and it's time to mature our processes surrounding the use of the web.

Monday, August 29, 2016

Complete testability - analysis quality essentials for the confused

Requirements are usually described as having an intertwined bunch of qualities we aspire to. In reality, it ends up like a person admonishing you to lead a good life. Engineers took a lot of time listing these qualities out in documents like IEEE Standard 830-1998, superseded by the more impressive ISO/IEC/IEEE 29148:2011, which define qualities like

  • measurable
  • testable
  • consistent
  • correct
  • complete
  • unambiguous
  • ranked
  • modifiable
  • traceable
  • feasible
  • independent
  • necessary
  • non-redundant
  • terse
  • understandable
Most people don't want to take the time to assess a list of qualities like those against a set of a thousand requirement statements, nor do they think it has any value. They're right, but they probably don't know why. They want their requirements to lead a good life, but there are so many options it's hard to focus. Here's some advice to help people focus.

The two most important qualities of requirements to keep in your head as you write project documents are easy to remember. Most of the others follow when these two are honored. All the important documents for modeling requirements are covered when these two are followed, and the documents' ability to be understandable and to communicate well is reinforced when these two qualities get attention.

The qualities we seek to promote to the top of our analysis minds are testability and completeness. Let's call it complete testability for short. They have the most value for peer reviews because they are easily remembered. And they handle the different levels of granularity well, addressing the needs of the single requirement as well as the larger requirements in scenarios and entire projects.

Testability per requirement

To be testable, we must be able to write a test for a requirement. If we say we are going to arrive at work, we can't test that because we don't have a metric to measure by. Business people have seen that kind of task irresponsibility, driven by vague promises and the need for follow-up at a specific time or day. An assigned task requires a test to see if it was completed, and the metrics have to be known ahead of time.

If we say we are going to arrive at work and our only metric is time, we have to give some sort of unit and measurement to the test. The unit is minutes and the measurement is 8 am. Now I can send someone to my office and have them check whether, within one minute, I show up at the office at 8 am. Test done.

Tests cannot be formed from a negative requirement unless there is a constraint involved. If you said you will not be at work today, you have a negative requirement to test, but it is limited by a time constraint. Still, a person would have to spend the entire day waiting, from 8 am to 5 pm, just in case you showed up for one minute and invalidated the test. Without the constraint there would be no end to the test, so no final test result would ever occur.

So a test requires both a positive event and a way to measure it with a metric. The metric involves measuring something with a unit. Writing the requirement this way creates a formal approach that eliminates most bad requirement writing.
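A tiny sketch of the arrive-at-work requirement as an executable check; the function and constant names are mine, but the unit (minutes) and the 8 am measurement come from the example above.

const DEADLINE_MINUTES = 8 * 60;   // 8:00 am expressed in minutes since midnight
const TOLERANCE_MINUTES = 1;       // the one-minute window the observer is given

function arrivedOnTime(arrivalMinutesSinceMidnight: number): boolean {
  // a positive event (arrival) measured with a unit (minutes) against a target (8 am)
  return arrivalMinutesSinceMidnight <= DEADLINE_MINUTES + TOLERANCE_MINUTES;
}

console.log(arrivedOnTime(8 * 60));     // true:  arrived at 8:00
console.log(arrivedOnTime(8 * 60 + 5)); // false: arrived at 8:05, the test fails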

Completeness per project

The other requirement quality that works to create great requirements is really not about the individual requirement. The completeness check is about traceability and dependencies throughout the project. If you know you have to be at work today, you might want to think about all the other days you want to be at work. This takes the requirement as a specific case and makes a generality out of it. You can find more requirements for days you should be at work this way.

The other type of completeness check involves going from the general to the specific. In this case of going to work: when you go to work, should you stay at work all day? Can you leave for lunch? How long can you be gone before it's no longer seen as being at work a full day? If you drill down into just the lunch period for more detail, you can ask whether taking a longer lunch while talking about work counts as working.

Another type of completeness check might involve switching the role or actor of the scenario or looking for alternate scenarios. Is the work day the same for all job titles? How about a different type of work activity, like being on the road? Now being at work today takes on a different meaning. Also, in the scenario of a use case, we can ask whether we captured all of the sequential tasks needed to achieve our goal.

Even looking at one specific quality for completeness can be rolled up into asking whether we have captured all the qualities that should be checked for all of our requirements. Now we are asking, as analysts, what we have forgotten. Sometimes it's an implicit requirement that no one has stated but everyone expects. Or maybe it was just forgotten. Don't let this lack of completeness analysis turn into scope creep you blame on the users.

Complete testability

I value completely understanding and testing the requirements so much that the documents I write can be used, in their entire content and structure, as test documents for successful test cases. It's also, in my mind, the reason programmers have taken to Test Driven Development, which is just making requirements testable in code editors: they didn't get good requirements in the first place and feel that a giant methodology is an administrative waste of time.

If the developers can change how they write code, surely the analysts can change to a more structured approach that keeps emphasizing the testability and completeness of the requirements document. It does mean that analysts can no longer just document what they hear. They will have to apply judgment, using some rules about what is complete and testable for their environment.

But the end result will be that the business or individual ends up with a better and cleaner implementation in code or business process, one that is easily tested and creates more feedback to improve the quality of the end product.

Thursday, July 28, 2016

Ranking rain and risky business events with confidence in strategy and testing

Risk isn't measured easily. It's better viewed as a personal understanding of probability applied to avoiding unpleasant situations. It can also measure positive chances but we'd rather know what the chance of rain is instead of the chance of sunshine. The point of risk management is to find out where risks are and how to mitigate them if possible. Why are we obsessed with measuring it when it's really a useless number?

I am amused by meteorologists who measure the probability of rain by taking the area that will get rain as a percentage of the total forecast area and multiplying that by their confidence. Nobody really understands what 40% means, but that's math. We like percentages for rain and risk because, over repeated forecasts, we have something to compare against whether it really rains or not each time. But the weather folks spend more time on this than project managers do, so we'll go with their best guess on risk measurement.
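For what it's worth, the arithmetic behind that kind of forecast is simple enough to sketch; the sample numbers are invented.

// chance of rain = forecaster's confidence x fraction of the area expected to get rain
function chanceOfRain(confidence: number, areaFraction: number): number {
  return confidence * areaFraction;
}

console.log(chanceOfRain(0.5, 0.8)); // 0.4, reported as a "40% chance of rain"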

Probability of events

Probability, and therefore risk, starts with measuring outcomes of similar events. To measure anything as a business metric so you can compare results, you need a quantity of a standard unit. Math is funny: when you divide the number of useful past events by the total past events, you lose your units of measurement. So you don't have any probability units. Nothing assigns one IT project 4 RUs (risk units) and another 15 RUs. So probabilities of different kinds of events can't be compared. That's our first problem.

Risk has to be related to a concrete event. Rain forecasts assume a probability of occurrence drawn from a large number of the same type of event, and as a stand-in we use similar historical events. A 50% chance of getting heads on a coin toss doesn't mean anything after one toss because it's one event. Your only two outcomes are heads and tails; there's no 50% tails. But a history of 500 tosses showing heads 50.3% of the time gives us some confidence.

Confidence in chance

Confidence is what gets degraded when we don't have similar events. In projects, we never have the same events. Like the thousands of influences on a storm system, the more of them that are about the same, the higher the confidence level of the forecast, which raises the stated chance of rain. Confidence is easier to establish when the events are less reality-based and can be easily counted. Coin tosses dominate the probability world, but even tossing a coin has been shown to be not that simple.

Most business people don't have a backlog of 50 similar-sized projects of the same type run by the same project manager with the same budget. The more events with the same influences, the better the confidence value. So confidence erodes easily, making our probability estimate less trustworthy and our calculations less helpful. That's our second problem. We can make an educated guess of how likely an event is based on past events of the same type, but the necessary confidence reality check will destroy the risk result.
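A rough sketch of why a short history erodes confidence: the margin of error around an observed proportion shrinks only with the square root of the number of similar events. The 95% formula here is standard statistics, not anything from project management.

// half-width of an approximate 95% confidence interval for a proportion p seen in n events
function marginOfError(p: number, n: number): number {
  return 1.96 * Math.sqrt((p * (1 - p)) / n);
}

console.log(marginOfError(0.5, 500)); // ~0.044: 500 coin tosses pin heads down to about +/- 4 points
console.log(marginOfError(0.5, 5));   // ~0.438: 5 similar projects tell you almost nothing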

How priority affects risk

Priority is not risk. Priorities are best assigned to scope units, called work packages or use cases, that estimate total value. I like ITIL's breakdown of priority into impact and urgency, and I adapt it from the incident or trouble ticket to the use case. ITIL puts the value in terms of the number of people affected (impact) and the value perceived (urgency). For internal projects it might be expressed as a high, medium, or low number of employees and a high to low level of visibility to executives or high-level managers. For external projects it might be expressed as a high to low market share and a high to low price point compared to competitive products.
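A small sketch of that impact/urgency combination as it might be coded for a use case backlog; the three-level scale and the multiplication are my illustration of the idea, not an official ITIL formula.

type Level = 'high' | 'medium' | 'low';

const weight: Record<Level, number> = { high: 3, medium: 2, low: 1 };

// combine how many people are affected (impact) with the value perceived (urgency)
function priority(impact: Level, urgency: Level): number {
  return weight[impact] * weight[urgency]; // 9 is the top priority, 1 the lowest
}

console.log(priority('high', 'medium')); // 6: affects many people, moderate perceived value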

In fact, priority does not influence risk at all. Risk can be multiplied by a priority ranking to show how value is reduced, but the end result of one subjective number times another subjective number isn't useful except for comparing items to each other. Priority is good for ranking the order of development of use cases. It's also good for ranking test case execution effort and thoroughness after factoring in development complexity and unfamiliarity of the unit under test.

Project risk

There are two easily confused types of risk in a project. There is project risk which occurs during the project. And there's organizational risk which is taken as a bet when the project is chartered and realized when the value is gained.

PMI defines project risk as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives." But by that definition, since all future events are uncertain, all project activities are risks. Even if we declare risk to be the measure of the likelihood of a project failing to meet objectives, we're still in the same boat, because any event could conspire to reduce the full attainment of the objectives.

Project risk bets are taken mostly in design. The bet on the scope to be delivered starts when you spend money on a design and ends when the product or code is delivered using that design. Strategy and analysis phases can influence design, but design commits you to a path. These are all known risks, estimated up front. There is always a component of unknown risks, like developers changing jobs or emergencies, but that's what project managers keep 'flex time' for.

I have seen the risk associated with failure to do adequate analysis called scope definition risk. That's the same baloney as when people call bad requirements gathering, discovered down the line, scope creep. This isn't project risk; it's systemic and therefore organizational risk. Check your process maturity and start there with a project constraint. Since you know you don't have good enough requirements management, you should expect some decay around the elicited scope. It's a given, not a risk.

Organizational risk

The estimated value or dollars associated with the project, which guides the project manager, is broken up into three critical success factors. The three things that make a project succeed are
  • finishing on time
  • coming in on budget
  • delivering total decided scope
The value measurement units that we can use for each of these are standardized into 
  • number of days
  • amount of dollars
  • use cases
The first two can be reduced to the same unit: dollars. Days are just a measure of the cost of overhead, labor, and other consumed resources. Scope is harder. If you are using good requirements estimation units as recommended in my blogs, you should choose a scope measurement unit that delivers those units of value, known as use cases and user stories in the analysis world or work packages in the project management world.

Then you can use a standard conversion of like-sized use cases to PMI's approximation for work packages of up to ten days, depending on complexity. So input from the developers, who gauge the difficulty of a use case, is critical. Scope estimation is a similar process to test case effort estimation. It might break down to the ranges below, with a small conversion sketch after the list:
  • low - 2-4 days
  • medium - 5-7 days
  • high - 8-10 days
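Here is that sketch of the conversion; the day ranges come straight from the list above, and taking the midpoint of each range is simply my assumption for a rough total.

type Complexity = 'low' | 'medium' | 'high';

const estimatedDays: Record<Complexity, [number, number]> = {
  low: [2, 4],
  medium: [5, 7],
  high: [8, 10],
};

// rough project scope: sum the midpoint of each use case's day range
function estimateScopeInDays(useCases: Complexity[]): number {
  return useCases.reduce((total, c) => {
    const [min, max] = estimatedDays[c];
    return total + (min + max) / 2;
  }, 0);
}

console.log(estimateScopeInDays(['low', 'medium', 'high'])); // 3 + 6 + 9 = 18 days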
The budget component should be constrained at the point where the value can be assessed, before the decision for approval is made. At that point, if the time component is also constrained, there is no flexibility on the scope component, which means that only some unknown amount of value can be delivered, creating high anxiety for the project manager, who hopes that the scope is minimal, the complexity is low, or the developers are wizards. Once the requirements have been analyzed, it may be that the solution is too big and the scope can't be delivered successfully, leading only to a project cancellation and sanity preservation for the enterprise.

If the time component is flexible, there's still some angst about the outcome but now with some ability to adjust. Using employees over time will still accumulate project cost, so the amount of time necessary to deliver a solution may push the expenditures over what the project is worth. So where's the risk here if the value is estimated, the time is estimated, and the scope determines whether the project is green-lighted or not?

The bet on known risks is taken at chartering and cashed in when in production. But unknown risk lies in the changing influences on the solution, which is a strategy responsibility. A Pokemon Go clone is only good if the market has not produced any other competitors or people feel like they have found enough Bulbasaurs or Meowths.

Strategic constraints subject to influence now become sources of risk. Events expected to happen that influence the planned time or budget for the project must be listed. How do you judge the probability of occurrence of a one-time future event? You don't. You just put a number on it and rank the items by that. It's all subjective. The next person puts a different rank on it, and then you compromise. Instead of risk, let's call it a proposed ranking of possible scenarios that affect value.

It's all a guess

But because the infinite number of influences can never be known, it's not going to be that useful to collect historical information, since the past may not accurately reflect the future. And it takes quite a bit of time to collect, enter, write up, and present those reports. Maybe it's better to just go with your gut when you need to rank risky events. Or check whether your arthritis pain is flaring up like it does before the weather changes.

Monday, April 25, 2016

The new web with AngularJS 2 & TypeScript - a browser marriage, not a date

The world of web architecture is undergoing its third major overhaul, and there's still more to come. This third version of the web development architecture centers on the browser as a stand-alone engine, so mobile apps and desktop apps have few differences; developers can write an app and then easily use it on a desktop or mobile device, or even embed it in a proprietary mobile app. This is what folks are calling a Single Page Application (SPA).

As web developers, we've discovered the benefits of having static pages, then having our entire web site be pre-processed by the server, and now, since jQuery gave us the confidence to move some of that UX processing to the browser, we’ve been slowly moving more and more to the client. But a client-side web app is a marriage to the browser and not just a date.

Angular's popularity

In the last few years, the client-side processing world has matured. You may think it's still in its adolescent phase, but it's showing enough promise and enthusiasm to garner attention from the big players out there. The buzz is concentrated around Google's AngularJS web framework, which manages a client-side set of views and handles some I/O. Microsoft thought well enough of it to partner with Google and provide a much-needed improvement to JavaScript called TypeScript. Google dropped development of a similar language and adopted TypeScript as a primary language for AngularJS 2, just released in beta as of this January.

Angular also matured through five years of work to find a way to reuse web components the way other types of software do. But the web is stuck with an amalgam of HTML, CSS, and JavaScript that requires the assistance of the browser to make it all work. And contending with the offline nature of mobile apps threw a real wrench into traditional coding. We have to have our own database storage, routing between pages, and communication with other servers. And we can't hide behind the protective walls of the secure server.

Google has done a masterful job of keeping a framework powerful enough to satisfy developers hungry for complexity and customization while making it simple enough to be used by designers and jQuery-level developers. It's got all the magic of RxJS observables (really? you're still working with Promises?) along with patterns that help us code easily with AJAX. AngularJS 1 had the problem of too much high-level architecture and a brick wall that kept the team from adding some great features that we now get to have in Angular 2.

The only other major competition to Angular has come out of Facebook’s React framework. It has less buzz and more complexity than most developers need. Unless of course, you’re developing large systems like Facebook. But it looks like it's still morphing into better versions and a few years will give us a better understanding of how to use it.

Working with status quo and my class

In business, there’s a need to keep moving ahead but still use the resources that have been paid for. The frameworks of .NET and Java still rule the enterprise web sites and in order to make Angular play well with these guys, it will take some thoughtful redesign. Remember that these server side frameworks do all the rendering of the web page before the page is sent to the browser. There on the browser is where Angular expects to do its stuff but it makes more sense to do secure data access back on the server and generate a page there for the most part. What we are looking for in a client-side application is to improve that and find the best of both worlds.

The improvements we can get from Angular include a better user experience through faster page rendering. We also gain when we design a great reusable library of web components that designers can just drop into pages. Those can be excellent advantages if we learn how to trim back the features we've been using on the server and concentrate instead on the asynchronous data access and microservices challenges before us.
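A sketch of the kind of drop-in component such a library might contain; the selector and input names are invented for illustration.

import { Component, Input } from '@angular/core';

@Component({
  selector: 'price-tag',
  template: `<span class="price">{{ label }}: {{ amount }}</span>`
})
export class PriceTagComponent {
  @Input() label = 'Price';  // designers configure the component from the page template
  @Input() amount = 0;
}

// a designer drops it into any page as: <price-tag label="Sale" [amount]="19.99"></price-tag>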

For the last four months, I've been developing a class for my employer, Centriq Training, on AngularJS 2 and TypeScript that I think will meet the needs of the business user, from the component developer on desktop and mobile to the designer thinking about the future of web components, possibly using Google's Angular Material. It's been the most difficult course I've written.

It's been a challenge because minimal documentation was available at the start, other published material was poor except for what came from the Angular developer community (God love you, Pascal Precht and Victor Savkin), and the documentation continues to be updated by the user community. The AngularJS 2 API docs still have a lot of "WHAT IT DOES - Not yet documented" entries in them. But hey, it's still beta. For the most part, it sits midway between the massive headaches of the patchwork of packages without a framework that I see with JavaScript solutions and the tried and true .NET frameworks.

I keep seeing improvements and industry support, which indicates that TypeScript and AngularJS 2 are going to be leaders for years to come. The editors I use are a delight. For free use and for my class, it's Microsoft Visual Studio Code, which has nothing to do with the Visual Studio editors for C# and other languages. (Do they know they are following in the footsteps of confusing Java and JavaScript?) That and JetBrains' WebStorm are the best tools available so far.

If the transition to responsive design and mobile apps didn't overwhelm you and you want to see how the next wave is going to improve things, it's definitely worth a look at these maturing products, which will easily continue to shape web architecture for the next five years. Hope to see you in class!

Tuesday, January 5, 2016

Assertions & assumptions, eels & heels - Clear terms create understanding in analysis

There are several terms that cause confusion in analysis, and sometimes they come in pairs. The two pairs here are preconditions / post-conditions and assertions / assumptions.

It's not that people get them mixed up; it's just that they use them however they like. It's so much better when you know what you are saying and why you are using the term. A clear term used properly is the beginning of analysis understanding.

I don't blame any particular reason for not using terms well, because I had no clue when I was supposed to write up a project document, and the people I was learning from had bad habits. It's like when my wife, regretting something she had done, said she "felt like an eel" when she meant "heel." It's an obvious thing if you know the real use of the term, but it can cause confusion for those who don't have that experience.

Preconditions prevent functionality

In a use case, there is a section for preconditions that is never fully explained and ends up being a convenient way to start a use case from the middle of the scenario. My favorite example is when a use case requires the credentials of a user to be checked as part of the process. A log-on process is always part of a secure use case, and it should be modeled as part of a goal-driven use case.

Many people shortcut writing the full scenario by skipping the repetitious log-in tasks and declaring that the user must be logged in as their precondition. But if logging on has changed the state of the system to a different state, then the use case that changed it must be a complete use case that leads to value for that changed state. The log-in, no matter how many people log in, will not provide the business any value whatsoever. It's not a separate value-driven use case. Ever.

The precondition is a state of the system that must be true in order for the use case to progress to the first step that is unique to that use case. In the classic ATM example, you are prevented from running the Withdraw Cash use case when the machine has no money and the menu option for withdrawing cash is disabled. That means the precondition for the Withdraw Cash use case is that the ATM can pass the rule of allowing a user to withdraw up to their maximum daily limit, or some other rule. The state was changed by a previous Withdraw Cash use case, which did have value.
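If it helps to see the rule as code, here is a sketch of that precondition as a guard; the state fields and the limit rule are illustrative.

interface AtmState {
  cashOnHand: number;           // money left in the machine
  withdrawnTodayByUser: number;
  dailyLimit: number;
}

// the Withdraw Cash menu option is only enabled when this precondition holds
function withdrawCashPrecondition(atm: AtmState, requested: number): boolean {
  return atm.cashOnHand >= requested &&
         atm.withdrawnTodayByUser + requested <= atm.dailyLimit;
}

console.log(withdrawCashPrecondition(
  { cashOnHand: 0, withdrawnTodayByUser: 0, dailyLimit: 500 }, 40)); // false: the machine is empty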

In a business use case where the user first drives to the bank's outside ATM to get cash, the precondition has to change some. The scope of the business system must include the full bank property and building. The system precondition is still that the machine has enough cash, but the business way to prevent a person from using the ATM is to put a traffic cone in the ATM lane or turn the system off. It's anything a manager would do that doesn't involve the software itself.

Post-conditions repeat important tasks

Use case post-conditions are usually defined as the important things that need to be affirmed at the end of the use case tasks. That usually means repeating the important changes noted in the use case narrative. For instance, in the classic ATM example, a post-condition will be to make sure that the bank received the message of a withdrawal transaction at the end. But it doesn't have to be a state change. In fact, the state of the machine should return to the way it was before the use case started if we are to follow the rule of being able to run a use case multiple times.

The post-condition of the bank transaction is that a message was confirmed to be sent to the bank by some means. In the design stage, this could be a callback function to the ATM or a log provided by the bank. Either way, it should be a task that could be audited.

Assertions are conditions too

Assertions are a more programmatic style of pre- or post-condition, used when the condition involves only the code. You assert that inputs match your data rules: a technical precondition. You assert that outputs match your data rules or that the state of something changed: a technical post-condition. The code for assertions is removed when deployed.
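A small sketch of what that looks like in code; the DEBUG flag is my stand-in for whatever mechanism a team uses to strip assertions from a deployed build.

const DEBUG = true; // a production build would set this to false and drop the checks

function assert(condition: boolean, message: string): void {
  if (DEBUG && !condition) {
    throw new Error(`Assertion failed: ${message}`);
  }
}

function applyDeposit(balance: number, amount: number): number {
  assert(amount > 0, 'deposit amount must be positive');      // technical precondition
  const newBalance = balance + amount;
  assert(newBalance >= balance, 'balance must not decrease'); // technical post-condition
  return newBalance;
}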

Assumptions occur after analysis

Assumptions, however, are not preconditions. You don't get to assume that your users are logged in. Assumptions are important decisions taken at design time when we need to guess and take a bet on the chances that something will occur. You assume that a health care system you manage will be affected by a possible repeal or replacement of ObamaCare.

You can understand what your stakeholders want in a sign-up site for your insurance, and you know they want something that follows and takes advantage of the current laws. But there is an uncertainty, measured through a risk assessment process, about whether or not the bill will pass. The project manager will be the one to assess the risk and make that bet on the value that could be gained or lost.

If the choice to take the risk is a good one, the design team starts working on the system, and while they are designing screens and systems, the assumption must be made that there will be a replacement for ObamaCare. Not all the risks you assume in your design will happen, but the choices you make may reduce your time to market, increasing the value of the system on the bets that do pan out.

Create understanding through clear terms

Terms, although deposited squarely in your beloved template, are not automatically correct. The best way to gain some understanding is to start questioning the value of each piece of information in your template. Corroborate that with the other team members. Update it, move it, refactor it, or delete it. Make it useful. Add an appendix if you have to. Then it will start being more understandable.