Tuesday, July 8, 2014

New ThoughtWorks Technology Radar out today

Four trends are identified in ThoughtWorks' strategic IT report, the Technology Radar, published twice a year. The most important one for me concerns JavaScript technologies and the challenge they bring for understanding. That is the reason I'm developing my new course, JavaScript Powered Web Apps, focusing on applying the language to client-side logic for web pages and doing more with the browser. And it's been challenging without the guidance of book authors and coordinating corporations.
Here's the full excerpt on JavaScript:
Churn in the JavaScript World — We thought the rate of change in the Ruby open source space was rapid until the full rush of JavaScript frameworks arrived. JavaScript used to be a condiment technology, always used to augment other technologies. It has kept that role but expanded into its own platform with a staggering rate of change. Trying to understand the breadth of this space is daunting, and innovation is rampant. Like the Java and Ruby open source spaces, we hope it will eventually calm to at least a deluge.
The other three trends were 
  • Microservices and the Rise of the API (also somewhat aligned with the JavaScript trends), 
  • Conway's Law ("organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations") and
  • Re-decentralization 

Monday, April 21, 2014

JavaScript Powered Web Apps - new programming course

The world of web programming is moving ahead into what should be called Web 3.0 (the semantic web is just a pipe dream). It uses JavaScript as a unifying layer and re-imagines how the web can do what it does without the benefit of large back-end frameworks. Web 1.0 was delivering files to a client. Web 2.0 was letting clients think they were in control by faking a desktop application over HTTP. And now, in 3.0, we can have a true application enhanced with networking to services.

For the next several months I will be developing a new course that brings these components together, complementing the classes I wrote a few years back on HTML5, CSS, jQuery, jQuery Mobile, and JavaScript. I see tooling for web applications starting to mature, and it is the right time to start promoting a new style of web application development. Except that there's no one way to do anything yet. Adobe is getting close to providing another great IDE with Edge Code, but I think we're a ways off yet. Even Google might become a player here with their IDE code-named Spark, built with Dart and Polymer.

The new course, JavaScript Powered Web Apps, will walk students through building a "site" using combinations of node.js, nginx, SASS, Mongo, Bootstrap, GitHub, jQuery, jQuery Mobile, Grunt, AngularJS, Knockout, Express, etc. I'll probably do four days of sample sites and then show a web workflow and let students choose their own tools.

The course will assume some programming or design experience with a web site but most of the exercises will be scripted so that anyone can follow them. The students that likely will be most interested are those that don't have the back end skill set and see a tremendous advantage in learning only one language to do both front and back end coding. That means that anyone from high school that has learned the basics of HTML and CSS can enroll.

A lot of the training is on the administration and workflow of the tools which is harder to learn from books. I'll try to capture what I can for the exercises but if anyone has suggestions, I'm willing to listen. And as always, the class will continually be updated as the tools rev and newer tools emerge.

For instance, I'm still waiting to see how Famo.us is going to impact the animation tools. It looks incredible but won't be completely public until May 19th, when the HTML5 Dev Conference in San Francisco starts. And I want to spin up a partial.js site as well, I think. But I'll never know until it's finished. After all, a project's requirements are never finished until the project is over.

Thursday, January 23, 2014

Documentation as a control mechanism - not! Think communication.

People associate documentation in Agile with waste. In fact, the Agile Manifesto prefers "working software over comprehensive documentation" but does that mean that the purpose of documentation is secondary to the output of the project? Agile set us up with a poor dichotomy. I mean I prefer getting a paycheck over driving to work. Maybe we're asking the wrong question.

I don't see documentation as an optional part of the project. It's definitely an output, and it can be measured, and that is the allure of the artifact. When I look at process outputs, the traditional use of documentation in a project is as a control mechanism. It's often managers, and specifically project managers, who finalize a phase with the documentation. It's what they do. They design their work package by understanding the scope of the effort, and if there is something measurable at the end, it becomes a good work package.

Back in the darker ages of procedural coding, there was a movement to measure code by what it did. That entailed putting an estimate on the smallest granular operations of the computer in the code itself. That worked for a while, when code was consistent in its granularity. But code has changed, and what we can hide in a line of code has become enormous. The function point estimation methods died.

Now we estimate, if at all, with a measurement of work package time completion and complexity (from the programmer's perspective) or confidence (from the project manager's perspective). The dependency is on the work package design. If we don't have a good description of what it is we're going to do, we can't plan it. Many projects flounder from a lack of analytic description.

So where does this good description come from? Well, your requirements are the place to find these descriptions. Of course, the best requirements are ideally the needs and wants of the stakeholders, massaged into testable work packages, detailed down to repeatable tasks so that no business questions have to be asked, and constrained by project limitations. But in reality, they are more of a garbage dump of what people said in excruciatingly long meetings.

The requirement document gets written and then we're on to the next phase. Follow along on your Gantt chart please. As a programmer, my input should be the output of the analyst. But with all the hubbub in programming circles about how to organize and manage testability, it looks like the analyst isn't doing a very good job. I don't see much evidence of usable requirements from my point of view either.

So, the Agile people are right. Processes without consumers are pure waste. Let's right-size this documentation by eliminating it. No one used it anyway. But what are we losing? We're losing the ability to record a decision and to think about the design of the business. Of course, if it wasn't good for the programmer, then it was useless.

But let's consider that the analyst can benefit the programmer. Then the documentation becomes a stepping stone to better code. Then there is communication of the needs of the business. Then less thinking falls on the programmer, who gets to focus on writing well-structured and reusable code.

The role of the documentation is really that of communication when the project scales. If you are the sole stakeholder and programmer, you probably have all the requirements circling in your head at any one time. No documentation is necessary. If two people know exactly what has to be done after a good agreeable meeting, no documentation has to be created. But if there is a memory loss, a sick day, a new member to the team, you will need some documentation. The need for documentation increases as the need to communicate increases.

The management of documentation needs a metric instead of lazily setting the goal to that of creating a greater return on investment (ROI). Just how do you measure that? My standards for measuring are in the more subjective realm whereby you produce the documentation, ask the user of that documentation if they understand, and then see if they are able to do their tasks without any further questions. The quality score descends as the need for answers or the amount of perceivable confusion increases. Get feedback.

Documentation is not a gate to the next phase, to be signed off. I'll take the stance a little further than the traditional "living document" style of writing. Since it is to be a communication mechanism, it has to always communicate the current understanding of what the project is about. Anyone and everyone can be a contributor, but the use cases / user stories / work packages should be maintained by the analyst / technical writer role so that they achieve the best level of testability and detail. Wikis are good.

The trend of "barely good enough" documentation, I think, is allowing programmers to use their analytic skills in place of the poor business analysis skills in the workplace, which is sad but the best workable solution to getting the job done. Stop producing unusable business documents and let the programmer get on with the code. Why are programmers so strongly in favor of commenting their code and of Test Driven Development? Because those are the tools that get the documentation done a better way.

So, let's eschew the notion of controlling the project by requiring the project members to produce a result that isn't used in the next phase. Control the project by understanding the work package completions. The artifact that completes the work package is the code or the pseudo-code (the use case) in some form or another, not a project document.

So, is documentation secondary to the output of the project? Preferring a paycheck over driving to work compares a result with a task for getting that result; it depends on whether you have to go to the office or not. Documentation is not secondary. It's just a question of whether you need to communicate more or not.

Wednesday, May 22, 2013

Thoughtworks Technology Radar

Free software, free books and Technology Radar. If you've sat any of my classes, you've heard me talk about those three things every time.

I have been reading Martin Fowler's writing for most of my IT training career and have found him practical, in-depth, and current. He also satisfies that craving for a little higher-level analysis, the kind developer blogs don't always cover well enough; their lack of experience with other technologies limits their authority to change my opinion.

The new edition of TR is out today. It's taken me years to work through some of the recommendations that their think tank, ThoughtWorks, has put together on the cutting edge worth keeping abreast of. They also don't mind telling you when a technology is not worth your time. Both are worth my time to read and understand.

The trends highlighted in this issue are:

  • Falling boundaries - cloud development, co-location, perimeterless enterprise
  • Proven practices to areas that missed them - CSS frameworks, database migrations for NoSQL, etc.
  • Lightweight analytics
  • Infrastructure as code
The four major areas that are reviewed are
  • Techniques
  • Platforms
  • Tools
  • Languages & Frameworks


For me, being mostly a web developer, I was happy to see that the Adopt recommendations in the Techniques section were for testing mobile apps on real mobile networks, moving away from the fake simulators, as well as using promises for asynchronous programming, giving assurance of feedback for us JavaScript/AJAX coders.
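The promises recommendation is easy to show in code. Here's a minimal sketch using the now-standard Promise object (at the time of this post you'd reach for a library like Q or jQuery's Deferred); the loadScore function and its fake delay are invented for illustration:

```javascript
// A promise wraps an eventual value so callers can chain work onto it
// instead of nesting callbacks.
function loadScore(playerId) {
  return new Promise(function (resolve, reject) {
    // Stand-in for an AJAX call; settles after a short delay.
    setTimeout(function () {
      if (playerId > 0) {
        resolve({ id: playerId, score: 42 });
      } else {
        reject(new Error("invalid player id"));
      }
    }, 10);
  });
}

// Chaining: each .then receives the previous step's result,
// and a single .catch handles errors from any step.
loadScore(7)
  .then(function (player) { return player.score * 2; })
  .then(function (doubled) { console.log("doubled score:", doubled); })
  .catch(function (err) { console.error(err.message); });
```

The feedback assurance is the point: unlike a fire-and-forget callback, every caller gets either a value or an error, never silence.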

The next level of recommendation down from Adopt is Trial, which should be approached with a little more thought. You see HTML5 storage replacing cookies here, as well as Mobile First and responsive web design. I agree with placing all of those there because they're not total solutions to a problem. What's interesting is their lack of concern for exhaustive browser-based testing, put at the Hold level, which means don't worry about it.

For Windows and PowerShell people, you'll be glad to know that support for infrastructure tasks from Chef, Puppet, and Octopus (automated deployment of ASP.NET apps without PowerShell) has made Windows automation a much better choice.


Martin is following the NoSQL movement very closely and has put MongoDB as the choice for his Adopt level. CouchBase, Hadoop, and BigQuery are down one level in Trial. Node.js is down there too, probably as a technology too green to make it worth our while yet. But I'm waiting for their take on Polymer and Meteor. Also interesting in the next level up from the basement, Assess, are PhoneGap (Apache Cordova) and Zepto.js, a smaller relative of jQuery.

I wasn't surprised to see WS-* holding the bottom place in the platforms with REST taking over web services slowly but surely.


I'm using NuGet for .NET development and was happy to see it in the top level. Check out Chocolatey NuGet as well if you do Windows administration. Maven is on Hold.

I've been searching for the right observer component, which Google's Angular.js, Knockout.js, or Ember.js cover, but those also include the whole MV* framework, something I already have covered with either a Java or .NET web framework. Reactive Extensions and its JavaScript port (RxJS) didn't fare too well, sitting at the Assess level. My choice here for a solution to the observer pattern could be ReactJS or RxJS.
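To see what those observer components are competing on, here's a hand-rolled observable in the Knockout style; the names and API below are my own sketch, not any library's:

```javascript
// A tiny observable: stores a value and notifies subscribers on change.
function observable(initial) {
  var value = initial;
  var subscribers = [];
  function accessor(next) {
    if (arguments.length === 0) return value;   // called with no args: read
    value = next;                               // called with an arg: write
    subscribers.forEach(function (fn) { fn(value); });
  }
  accessor.subscribe = function (fn) { subscribers.push(fn); };
  return accessor;
}

var price = observable(10);
var log = [];
price.subscribe(function (v) { log.push("price is now " + v); });
price(12);
price(15);
// log: ["price is now 12", "price is now 15"]
```

The frameworks add dependency tracking and DOM binding on top, but the core contract is just this: read, write, notify.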

The one tool that surprised me was D3. I have been recommending Raphael for JavaScript charting and only watching D3 a little, but it shows up now at the Adopt level due to better complementary libraries such as Rickshaw and Crossfilter.

Languages and Frameworks

CSS frameworks like SASS/SCSS and Compass stay at the Adopt level. The web apps that are moving away from traditional client/server architecture have much to learn yet, but many frameworks are beginning to have business value, so they show up at the Trial level. These are HTML5 for offline applications, JavaScript as a platform, and JavaScript MV* frameworks. Twitter Bootstrap also shows up as an Assess. But Backbone.js and handwritten CSS are, just like last year, at the Hold level.

A surprising observation was that Team Foundation Server caused productivity problems as a version control system. ThoughtWorks recommends Git, Perforce, or Subversion instead. It's a good thing that Visual Studio works with Git.

And just when you thought analytics couldn't get any better than Google Analytics, they see great promise in Snowplow Analytics for aggregating data sets and managing your billions of web hits on AWS/Hadoop.

I'm sure I've missed some recommendations and packages people are using, such as that newfangled language named after the song Mel Tillis wrote and Kenny Rogers sang about a paralyzed vet's wife going into town for the evening without him. But read through the assessment and mine the results for some great improvements to your technology stack.

Tuesday, July 10, 2012

HTML5 and future of the universe

Web site people are always catching up. You get involved in an extended project and by the time you're done, they've changed the rules again and it's time to learn a new technology. I used to think this was a problem. Now I see it as a responsibility to manage the information of an ever-expanding set of hardware devices that are becoming digital.

As these digital devices come of age, they mature into web access points because of the value the web adds to the device. Some devices, like tablets, are born with the ability to talk to the web. Others, like DVD players, had to wait until they grew up.

HTML5 and CSS3 are expanding to fill the digital appetites of these new devices. As we innovate to market-test a device at every screen size possible, from phone to tablet to TV, the older development technologies will continually be updated until they break. So far, HTML is holding its own; CSS, not so much. Even the old dog, JavaScript, is doing well, though it leans on some sturdy jQuery and CoffeeScript crutches.

The raft of programming languages that weren't originally focused on the web but try to manage that environment pushes the simple text-retrieval process to be more app-like. In their desire to improve on the web's request-response communication model, they will, in my mind, eventually destroy it. Face it, it's slow. You can do faster communication with AJAX, which is why you see Google using so much of it in their web apps. It's why jQuery Mobile designed its GUI library for speed around AJAX, so much so that AJAX is its primary communication model, one that merely tolerates the web request model.

Now a newer model of finer-grained I/O control in JavaScript is starting to appear, called the WebSocket API. Did you not see this coming? It won't be the end until each language has a way to use a simple library of functions/methods to talk to any device of your choice.
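The WebSocket API is event-driven: you wire up handlers and the socket pushes data at you. A real client would call new WebSocket("ws://...") against a live server, so this sketch substitutes an invented FakeSocket stand-in with the same handler shape just to show the wiring:

```javascript
// Stand-in for a browser WebSocket so the wiring can run anywhere.
// A real client: var socket = new WebSocket("ws://example.com/feed");
function FakeSocket() {
  this.sent = [];
}
FakeSocket.prototype.send = function (msg) { this.sent.push(msg); };
// Test helper: pretend the server pushed a message to us.
FakeSocket.prototype.receive = function (data) {
  if (this.onmessage) this.onmessage({ data: data });
};

// The actual wiring is the same for the fake and the real thing:
// subscribe on open, parse and handle each pushed message.
function wireUp(socket, onQuote) {
  socket.onopen = function () { socket.send("subscribe:quotes"); };
  socket.onmessage = function (event) { onQuote(JSON.parse(event.data)); };
}

var socket = new FakeSocket();
var quotes = [];
wireUp(socket, function (q) { quotes.push(q.symbol); });
socket.onopen();                                  // simulate the connection opening
socket.receive('{"symbol":"XYZ","price":3.14}');  // simulate a server push
// quotes: ["XYZ"], socket.sent: ["subscribe:quotes"]
```

Compare that with polling over AJAX: here the server talks whenever it wants, which is exactly the finer-grained I/O control the API promises.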

Languages are shifting little in popularity. The only major change: a recent TIOBE trend announcement shows that the iOS platform's language of choice, Objective-C, has been the fastest gainer over the last two years, taking the #3 spot on the chart. With Java trending downward and losing the lead to C, a venerable stalwart, I'm not seeing where anybody is picking up the slack. C# is on a slow trend upward, but not by much. Even php is losing ground.

Reading the language popularity articles, my guess is that JavaScript is picking up the slack. Run a survey of projects on StackExchange and GitHub and you find that JavaScript has the top ranking there. Hacker News puts JavaScript in the top three as well, with Python and Ruby taking the top spots. To get a better sense of real-world usage, even job rankings on Dice.com show JavaScript in the top three. Book sales from O'Reilly put JavaScript at #2, showing a mix of business and hobby use.

The next step for JavaScript would be to revive the server side version of the language and make it as available as php is for Apache. Then the major languages would write APIs to talk to JavaScript and we would have a programming interface for the web. Oh, yeah, and somebody do something in JavaScript to make CSS easier to work with. LESS is a good start.

So, if you are a web person, the future of the web looks like it centers on web application development, and JavaScript is taking center stage. APIs will proliferate, and JavaScript/jQuery/minor-language support plug-ins will be promoted. One of the APIs next on my bucket list is YQL. I think with the first O'Reilly book on YQL out in a few months, we'll see an interest in mining the data of the web from JavaScript.

Excuse me now, I have to get back to the future and read about what's coming so when I start working again I won't be too far behind.

Friday, March 9, 2012

Impressive Shadow debuts at SXSW

Adobe is showing off a great new product for mobile developers at the SXSW conference in Austin this weekend. I caught the announcement and have been using it to achieve a better workflow. It seems the more devices you need to test, the better it works, but I'm just working with an iPad and an Android phone. It's great to sit back and watch all the screens update in real time, all at once. And it's still a 1.0 product, with many features to come in the future.

Some of the things you start realizing when you leave Shadow open while you browse are that some sites don't do a good mobile design, some require constant authentication, some use AJAX to fake a new page request, and some have a great sense of adaptive design.

The main feature is a Chrome plug-in that talks to your iOS or Android app: launch the Windows or Mac Shadow application with the apps talking over your local network, and the devices will "shadow" what you do on your desktop. The more you like Developer Tools in Chrome, the more you will like the product, because it shows a WebKit Developer Tools based window for your remote device.

The product was a very well-timed conjunction of talent: the weinre open source code, acquired through Adobe's purchase of Nitobi (the maker of PhoneGap), and Adobe's BrowserLab. Adobe Shadow is also free and looks to get only better as they add support for Firefox and localhost development environments.

The 1.0 version is posted at Adobe Labs. This will definitely be a product to work with in my new Mobile Web Application Development using jQuery Mobile course here at Centriq.

Wednesday, September 28, 2011

The web’s four hats & an overview of HTML5

I recently returned from speaking at a conference in Kentucky where I ended up putting on four hats and seeing HTML5 and the future of mobile computing differently each time. I was struggling with trying to figure out why standards committees take so long to come to an agreement. After all, the vision of the web is clear, right?

Why did we get off the XHTML superhighway and jump on the scenic route, where we ignore the bumps and potholes of bad syntax and limited metadata? What kind of folks are driving the bus right now, and are they really going the right direction? Should we stay on the bus while we head into Mobileville? I've decided that the questions you pose are at least as important as, if not more important than, the answers you think are right. Because that tells you what your vision is, and if you've followed a little of this blog, you know that strategic vision is important.

The four hats are:

  • The Engineer - who likes all of the functionality logically thought out
  • The DBA - who likes the data well defined and stored
  • The Designer - who likes to deliver an emotional message to the customer
  • The Businessman - who makes sure the customer is happy and they make a profit
Those four hats have led me to watch the standards groups less and to watch Google and Amazon more for what people will want to use in the future. The real vision of the web is to make money by giving computer users what they want. Waiting for standards groups won't get you there. After all, the current prediction is that CSS3 will be completely standardized by 2022. Here's a rundown on the thinking under each hat:


As a software engineer and instructor of many students who become programmers, I know that the web has become populated with these folks that want to turn it into an application environment. That way they can program for it and make it do their will. The back end servers are full of Java, C#, and php and that same force is heading to the browser with JavaScript.

I see the HTML world being altered so that these variable manipulators can have better code and read each other's functions easier. The push in HTML5 towards meaningful naming of the loosely typed datatypes that were divs and spans is going to give programmers a better handle on what it is they are talking to. The code will become less dependent on ids and more dependent on setting up the HTML with the proper element names like <mark>, <time>, <progress>, <meter>, <details> and some of the more argued over names like <style scoped>.

I can see where the types of activities previously handled by JavaScript are being pushed into HTML. This reduces the languages down to one, slightly more complex, language. Do we want a merger of interests here? Is the MVC principle being corrupted, and should we really mix our text markup with programmatic data validation in the same code module? The data and its validation are View Model pieces that are tightly coupled and should be handled together, in my mind.

So the future lies with the ultimate merger of HTML and JavaScript, and we'll have a Razor-style view engine in all browsers eventually. My opinion is not to follow the same syntax but to find a better way to merge the two. But the merger will eventually come. Simplified syntaxes for CSS3 and JavaScript have started to reinvent these stuffy languages, and if jQuery can simplify CSS with a replacement like EZ-CSS, Sass, or Less, then we're on our way. I'm hoping it's not like node.js and Stylus, but I need to try those more.
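The heart of a Razor-style view engine can be sketched in a few lines of JavaScript: a template with placeholders plus a data object. The {{name}} placeholder syntax below is my own simplification, not any real engine's:

```javascript
// Replace {{name}} placeholders in a template with values from a data object.
// Unknown placeholders are left alone rather than silently dropped.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? String(data[key]) : match;
  });
}

var page = render("<h1>{{title}}</h1><p>{{body}}</p>",
                  { title: "HTML5", body: "Markup meets logic." });
// page: "<h1>HTML5</h1><p>Markup meets logic.</p>"
```

Real engines add loops, conditionals, and escaping on top, but the merger of markup and data is already visible in those ten lines.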


I also love to understand data and its meaningful use. Having the right field name and the right multiplicity gives me a warm glow. But I also know that too much data analysis can torture those poor programming souls who have to recombine the data for the page they work on.

Here we're not talking about the fields where programmers hand over their data after the output is available but where the data can be semantically meaningful as text. These are the domain entities and at a high level they are represented by <header>, <nav>, <footer>, <section>, <aside> and <article>.

My biggest hurdle here is knowing that the data already comes from a semantically proper framework called the database. If all the work has been done there, then who benefits by rethinking the meaningful structure one more time for the web? The only answer is that the DBA traditionally sends the data to the programmers, who use that "data markup" for their benefit. The field names become bundled together, and as I put my Engineer hat back on, I am happy to create complex View Model types for my View, the web page.

But it's reinventing the wheel when there's a database, and it's better for team coding experiences to have more readable code. But wait, is there a web service that might be looking for my article elements? Not yet, in the HTML5 way. If you are looking for a current semantic framework of entities, you have to look no further than Google's recommendations. The microformats they recommend carry much more weight as a reason to mark up your pages than a standards board does. People are using the schemas for reviews, events, organizations, places, restaurants, recipes, etc. without even knowing it, but if you don't, you lose out on having Google process your data and provide search results with it.
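Marking up with those schemas is mostly a matter of adding attributes. Here's a simplified sketch that emits a schema.org Review snippet in microdata form; the helper function and sample values are invented, and real code would HTML-escape the input:

```javascript
// Build a (simplified) schema.org Review snippet in microdata form.
// itemscope/itemtype declare the entity; itemprop labels its fields.
function reviewMarkup(review) {
  return '<div itemscope itemtype="http://schema.org/Review">' +
           '<span itemprop="name">' + review.name + '</span>' +
           '<p itemprop="reviewBody">' + review.body + '</p>' +
         '</div>';
}

var html = reviewMarkup({ name: "Great diner", body: "Best pie in town." });
```

A crawler that understands the vocabulary can now pull the review name and body out of the page without any API, which is the whole argument for marking up this way.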

The XHTML push that we unceremoniously dumped was a DBA's dream: a perfect interface language between data source and data consumer. The problem was that the web is not a homogeneous set of data. Our web DBAs don't work under the same domain, and the complexity made it uncomfortable to work with so many needs and choices that we found had already been solved with another data model. Or two (relational and objects). We didn't need a third (XML).


The designer in me wants to have my great user interface so well integrated into the data and processes that you don't notice. And I want it to be beautiful. That hat wasn't the one the standards committees were wearing when they proposed the gradients and transforms. Misguided blink spans and easy marquees have been put to death as many times as Freddie and Jason but keep coming back for more. Now we have them again, only this time a little more well dressed. I really don't even like the drop shadows but some of these will look much better to the high-res tablet crowd.

I am happy with SVG support and the graphics we can do, but where is my visual SVG graphics designer integrated with HTML and CSS? Android was the last to jump on the SVG graphics bus, and most of the rest of the cheesy effects have partial support, so I know I will be trying out Inkscape and SVG-edit when I need them. Or I'll just sit back and let the widgets take over for me, like the SVG-based Chart control in Visual Studio and jQuery plugins.

I find the most pleasure with web fonts, especially Google's publicly accessible fonts and having Font Squirrel give me everything I need to serve them from my own server. I can talk to the device to get the info I need to deliver a pleasing layout with @media queries. And when I need that old-timey newspaper look, there's multi-columns, but I fear it's a retro feature better left alone due to the complexity it creates for the layout.
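An @media query is essentially a breakpoint rule. Expressed as a plain function it looks like this; the breakpoint values below are arbitrary ones I picked, not a standard:

```javascript
// Pick a layout the way a set of @media query breakpoints would.
// CSS equivalents:
//   @media (max-width: 479px)  { /* phone styles */ }
//   @media (max-width: 1023px) { /* tablet styles */ }
//   (everything wider gets the desktop styles)
function layoutFor(widthPx) {
  if (widthPx < 480) return "phone";
  if (widthPx < 1024) return "tablet";
  return "desktop";
}
```

In a browser you'd let CSS do this declaratively, but seeing it as a function makes the device-interrogation idea concrete.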

Surprisingly, I think the designer ended up with some of the best improvements. Colors are better defined both with more names and flexible opacity. The video and audio elements will have to wait for the patents to become less profitable while we find a common standard to work with. The dominance of the iPad will force me to encode a different way but I don't like it.


If you want to know who is going to win the HTML5 battle, you just have to follow the money. And the hat in charge of the money is the businessman. This hat is the hardest to wear since it requires you to not think technically and focus on the customer. Does the customer really want what we think is geeky cool warez? Probably not.

I liked the idea of doing a survey of the names currently used for divs and classes in the wild in order to understand how people used HTML. But what were we supposed to do with that? Did I really want a better element when I was happy just calling my div a wrapper in its id and not a section? You want me to recode it? When?

What helps me make money? Coding faster. Reusing other people's code. If all you do is rename things, then it's going to get in my way. Now if you make it easier to find elements in CSS by adding better selectors, I'm not going to complain but it is looking more and more like a grep or regex language these days.

I can reuse other people's code if they've marked up what I want to use, and now we're back to microformats, because what I want is not generic text groupings that are or aren't related. What I want is finer granularity on the most common business entities, which is exactly what the microformats deliver.

Other types of useful abstraction that will be faster or give me critical information are the new communications APIs like Web Workers and Web Sockets, plus the ability to work offline with session or local storage when paired with page caching. Geolocation is just a one-trick pony, but it had to happen once you start moving yourself around instead of sitting at the same IP day after day. And database storage on the client will improve the user experience for offline as well.
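The session/local storage API is just string keys and values, so the offline pattern is small. In this sketch the in-memory store object stands in for window.localStorage (same getItem/setItem shape) so it runs outside a browser; the key names are made up:

```javascript
// Stand-in for window.localStorage (same getItem/setItem contract:
// string values, null for a missing key).
var store = {
  data: {},
  getItem: function (k) { return k in this.data ? this.data[k] : null; },
  setItem: function (k, v) { this.data[k] = String(v); }
};

// Cache a structured value as JSON so it survives while offline...
function saveForOffline(key, value) {
  store.setItem(key, JSON.stringify(value));
}
// ...and read it back, or null if nothing was cached.
function loadOffline(key) {
  var raw = store.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}

saveForOffline("cart", { items: 3 });
// loadOffline("cart") -> { items: 3 }
```

Swap the store variable for window.localStorage in a browser and the same two functions give you the offline cache the text describes.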


People want a desktop application experience on the web and want it whenever they want it there. People have to have it easy and that means the developers as well as the users. So will I use the semantic markup? Probably. It will help me as a programmer. Will I use the <canvas> and other graphic elements? Probably not. I like SVG and JavaScript generated graphics as long as I don't have to write it. I can use Adobe Edge or other visual programs for that. Will I use the features that aren't well supported like drag & drop and datepickers where I can use jQuery and jQuery mobile to get standardization now? Obviously not.

The 2-D and 3-D graphics are incredible and not in my realm as a web developer but if you are a game developer that wants to leverage the e-commerce platforms of the future like the Apple App Store, Android Marketplace or possibly the Amazon juggernaut commerce system, then you must develop using <canvas> because it's the fastest and standardized.

As to why standards groups like the W3C and WHATWG don't move too fast, it's because you have four different groups of people all lobbying for their view. The DBA had the upper hand in the last version and now the designer and the engineer seem to be dominating. But like a real business, it's the dialog that is important to crafting a best solution for the whole. Let one company or one hat take over and it would be an impending fail.

The businessman is going to take you to that application experience that you so want. And he will charge you for it and pay you to develop great apps for it. The standards committees don't think like businessmen so you'll need to see what people are really using, really need, and get paid for. My best bet is to watch Google for how they use search data and to watch Amazon for how they will use their e-commerce platform. Of course, you'll also be watching Apple and Facebook as app delivery leaders. Then code for it and enjoy.