Sunday 24 February 2019

Improving the Accuracy of Software Development Estimates

In a previous post I discussed a useful analogy for software development estimates and, as I normally do, I posted it to Reddit, where it generated a bit of discussion. Among the comments there was this one by ghostfacedcoder (emphasis mine):

These articles always frustrate me, because they clearly come from people in (comparatively) bad companies who think idiotic things about estimates, and as a result the authors need to write arguments against that idiotic thinking.
But personally I've escaped those bad companies, and I could care less about reading more whining about estimates (NOT blaming OP: again people write about what's relevant to them ... it's just that that isn't relevant to me).
I'd like to see more articles about getting estimates more accurate, or organizing projects better around inaccurate estimates. You know, the stuff that's a lot harder to do than complain (although, again, I'm not faulting anyone for complaining, just saying what I'd like to read). But in my experience such articles are far more rare, and I guess that says something about the state of our industry.
This is an attempt to write an article about producing more accurate estimates. Here's to you, ghostfacedcoder.

Project Management Triangle


In project management, and I'm not just referring to IT projects, there is a well-known concept called the project management triangle, aka the Scope Triangle, aka the Triple Constraint. I was introduced to it at the local cobbler's, which had a humorous sign on the wall that read roughly as follows (translated from the original):

A quick job well done won't be cheap
A cheap job well done won't be quick
A cheap job quickly done won't be good.

Producing an estimate is no different and thus the quality of the estimate depends on the amount of resources spent on it. 

It is true that there will always be spanners in the works, but this intuitively makes sense: if the project manager asks a developer for an estimate and hovers around waiting for it, the developer has little or no time to think about the potential issues that might arise from an off-the-top-of-the-head design for the feature, which is likely to result in missed issues or an estimate for an approach that will not actually work.

So how can we do better?  The answer is to spend more resources (time and/or people) on coming up with the estimates.

Caveat emptor 


Firstly, I will put my hand up and say that the method I am proposing is entirely theoretical: the three times I used it to come up with an estimate, the feature ended up never being developed, so real-world data is still needed.

Secondly, producing the estimate took around 20% of the estimated development time, but as discussed later, some of this time would be saved during the development phase. This was for features that were estimated at around one person-week.

Thirdly, for any feature of reasonable complexity it simply isn't possible to be 100% accurate with an estimate all the time. Not even doing the feature and then re-doing it would give you a 100% accurate estimate, as the second time around you'd probably apply what you learnt the first time and do it differently, which would likely lead to a different duration.

Fourthly, I realise that the resource expenditure might not always warrant the increase in accuracy, but I think it can be useful in situations where, to give a couple of examples, there is a high risk to the business or trust needs to be repaired.

Finally, I think that the more familiar the team becomes with a codebase and the domain, the less time consuming this method will be.

There is Method in the Madness


There are various sources that contribute to uncertainty in an estimate but they can all be distilled down to one thing: assumptions. What this method does is minimise the assumptions by spending some time coding.

This is not about doing the work and then telling the PM how long it will take (having already done it); it is about doing some coding to ensure that the assumptions we make in our estimate are reasonable.

  1. Create a feature branch.
     The idea is to have a separate branch to enable pseudo-development.
  2. Add failing unit tests.
     If you don't use TDD you can ignore this step.
  3. Add Interfaces/Classes with empty methods that have the right signatures (models included) so that your project compiles successfully.
     The whole idea is predicated on this step as it forces the developer to think about the design of the feature (see the sketch after this list).
     It is very important that the data models are added here, as getting the data needed might be half the battle.
  4. Estimate how long it will take to write each method, writing down each and every assumption you make as a comment on each method and/or class/interface.
     Make sure that your actual time estimate for each method is not written down in the code, to avoid any possible anchoring effect.
  5. Submit a PR so that somebody can validate your assumptions.
     This step is not strictly necessary, but it is another gate in the process to increase accuracy, as a second pair of eyes might see different things, e.g. missing methods, unnecessary methods, an entirely wrong approach, etc.
  6. The PR reviewer validates the assumptions and also estimates the development time.
  7. An average, or even a weighted average, of the two estimates is taken and this becomes the team's official estimate. Let the PM have the bad news.
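
To make the skeleton step more concrete, here is a minimal sketch of what the empty methods and assumption comments might look like. The feature and all the names (Customer, IDiscountService, etc.) are invented purely for illustration.

    using System;
    using System.Collections.Generic;

    // Skeleton only: empty methods with the right signatures so that the project compiles,
    // with the assumptions written down as comments. No time estimates appear in the code.
    public class Customer
    {
        public Guid Id { get; set; }
        public decimal TotalSpend { get; set; }
    }

    public interface IDiscountService
    {
        // Assumption: a customer's total spend is already available and does not
        // need to be calculated from individual orders.
        decimal CalculateDiscount(Customer customer);

        // Assumption: the discount bands can be loaded in a single query.
        IReadOnlyList<decimal> GetDiscountBands();
    }

    public class DiscountService : IDiscountService
    {
        public decimal CalculateDiscount(Customer customer)
        {
            throw new NotImplementedException();
        }

        public IReadOnlyList<decimal> GetDiscountBands()
        {
            throw new NotImplementedException();
        }
    }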

The method also provides a good base for helping out junior developers on more complicated features as most of the design work has already been done.

Project Bids need not apply


While I think that this methodology can estimate a feature to a great degree of accuracy, it would not work very well for a bid. The upfront cost is so high that I would be surprised if anybody went along with it.


Saturday 23 February 2019

The proper way to restart a Windows Server

From an elevated permissions PowerShell console:

    Restart-Computer -Force

This avoids accidentally shutting down an unresponsive server while attempting to reboot it by hammering keys in despair at said unresponsiveness. Not that anybody would do that, certainly not me.

Sunday 20 January 2019

Software Development Estimates, An Analogy

In my previous role there was little understanding by the business of what software development actually entailed, and perhaps the biggest source of tension was estimates, which were completely and utterly misunderstood.

An estimate was taken to be a firm guarantee of delivery and thus any delay in hitting the expected delivery date was taken as a failure.

I tried a few different approaches to explain the intricacies of estimates:

  • The Dictionary Approach
    Essentially, remind the business of the dictionary definition of an estimate, namely: an approximate calculation or judgement of the value, number, quantity, or extent of something.
  • The Probabilistic or Confidence Level Approach
    Provide a probability associated with the estimate, e.g. I am 75% certain that this feature will be completed within 5 days and 99% certain that it will be completed within 8 days.
  • The Error Bar Approach
    Provide the estimate with an error bar to take into account estimation error, e.g. this feature will take 5 ± 3 days to complete.

The final approach, which worked reasonably well, was The Analogy Approach, but first a little story:

About a year into the job, we had a meeting in Birmingham, for which Google Maps estimated a travel time of 1 hour and 10 minutes; it took 2 hours.

There had been an accident as we left the office, so traffic was slow; we were stuck behind a tractor for a few miles; and then there was more slow running on the motorway.

A few weeks later we had another meeting in the Birmingham area; Google Maps estimated a driving time of 1 hour and 5 minutes, and this time we made it in just over an hour.

So The Analogy Approach is to use an analogy easily understood by the business namely:

Software development estimates are like the travel time on Google Maps.

This works really well as an analogy: heavier than normal traffic (the problem is harder to solve than anticipated), accidents (the laptop has decided to give up the ghost), being stuck behind a slow-moving vehicle (waiting for another developer to complete his/her part) and rerouting (having to use a different approach to solve the problem) are all unplanned and unforeseen delays that affect travel time in the same way that development estimates are hit by unplanned and unforeseen issues.

I would like to say that my analogy solved the issue once and for all and that I was never challenged on an inaccurate estimate, but I would be lying. What I can say is that I had a clear and easily understandable analogy to remind the business of whenever I was challenged on another inaccurate estimate.

Monday 8 October 2018

Using Dynamics 365 Service Endpoints - Part 2

In the previous post, we discussed Service Endpoints and how they can be used to integrate with other systems via an Azure Service Bus Queue.

There are other ways that Service Endpoints can be used to integrate with other systems:

  1. Topic
     Integrating with a Topic is very similar to integrating with a Queue, except that multiple systems can consume the same message.
     Let's say we wanted to integrate with our customer's Audit service and a third-party system; we would use a Topic to ensure that both systems get the messages.
  2. One-Way Listener
     This requires an active listener. If there is no active listener on an endpoint, the post to the Service Bus fails. Dynamics 365 will retry the post at exponentially increasing intervals until the asynchronous system job that is posting the request is eventually aborted and its status is set to Failed.
     The utility of this is beyond me given that the operation will always succeed in Dynamics, as endpoint steps are required to be asynchronous. I might be misunderstanding how this works, though.
     A sample can be found here.
  3. Two-Way Listener
     A two-way contract is similar to a one-way contract, except that a string value can be returned from the listener to the plug-in or custom workflow activity that initiated the post (see the sketch after this list).
     This seems like a more interesting proposition, as the other system can send information back to Dynamics 365, e.g. return the result of a calculation.
     A sample can be found here.
  4. REST Listener
     A REST contract is similar to a two-way contract but on a REST endpoint.
     A sample can be found here.
  5. Event Hub
     This contract type applies to Azure Event Hub solutions.
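
To illustrate the two-way case, here is a minimal sketch of the listener-side contract, assuming the ITwoWayServiceEndpointPlugin interface from Microsoft.Xrm.Sdk; hosting the listener on the Azure Service Bus relay is left out, and the "calculation" is a placeholder.

    using Microsoft.Xrm.Sdk;

    // Minimal two-way listener sketch: Dynamics 365 posts the execution context and the
    // returned string is handed back to the plug-in or workflow activity that initiated the post.
    public class CalculationListener : ITwoWayServiceEndpointPlugin
    {
        public string Execute(RemoteExecutionContext context)
        {
            // Placeholder "calculation": echo back what triggered the post.
            return $"Processed {context.MessageName} on {context.PrimaryEntityName}";
        }
    }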




Friday 5 October 2018

Using Dynamics 365 Service Endpoints - Part 1

Dynamics 365 offers various possibilities for integrating with 3rd party systems; one of them is using a Service Endpoint, which is a fancy way of saying the Azure Service Bus.

In reality things are a bit more complicated and will be discussed in a future post.

In this example we will create a Service Bus queue that will receive any new Search Engines so that these can be processed by a single 3rd party system.


1. Create Azure Service Bus Queue

From the Azure Portal
  1. Create a Namespace



  2. Create a Queue



  3. Add SAS Policy



2. Register Service Endpoint

In order to register a Service Endpoint we will need the connection details for the Queue, which can be obtained from the Azure Portal.

  1. Register New Endpoint



  2. Add Connection Details



  3. Complete Registration



  4. The Message Format is important as the code needed to read the messages will be different depending on the format.








  5. Register New Step


  6. Bear in mind that it needs to be registered against the service endpoint itself.



    This is what we've ended up with

3. Test

We create a new Search Engine



We can see that the message has been sent on Azure Portal



4. Processing Queue Messages

The code used to process the Queue messages can be found here and the full VS Solution can be found here.

Some of the code has been pilfered from the CRM Samples and updated to work with the latest version, at the time of writing, of Azure Service Bus.
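
As a rough illustration (not the linked solution), here is a minimal sketch of reading the queue using the Azure.Messaging.ServiceBus package; the connection string and queue name are placeholders.

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    class QueueReader
    {
        static async Task Main()
        {
            // Placeholders: take these from the SAS policy and queue created in the Azure Portal.
            const string connectionString = "<namespace connection string>";
            await using var client = new ServiceBusClient(connectionString);
            ServiceBusReceiver receiver = client.CreateReceiver("<queue name>");

            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
            if (message != null)
            {
                // The body is the RemoteExecutionContext posted by Dynamics 365; how it is
                // deserialized depends on the Message Format chosen when registering the endpoint.
                Console.WriteLine(message.Body.ToString());
                await receiver.CompleteMessageAsync(message);
            }
        }
    }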

The verbosity of the messages is peculiar; it would be nice to be able to do something similar to plug-in (Pre/Post)EntityImages, namely just send a few parameters.

In this highly contrived example we might just need to send two parameters (name and url) to our 3rd party system, yet ~5 KB of data is sent.

Thursday 4 October 2018

Dynamics 365 New Features - Alternate Keys

I will confess that these are new features for me, so if you happen to have left the Dynamics CRM world at the same time as me and are coming back to it now, this post will be super useful; otherwise, well, maybe not so much.

Alternate Keys

With alternate keys, you can assure an efficient and accurate way of integrating data into Microsoft Dynamics 365 from external systems. It’s especially important in cases when an external system doesn’t store the Dynamics 365 record IDs (GUIDs) that uniquely identify records. The alternate keys are not GUIDs and you can use them to uniquely identify the Dynamics 365 records. You must give an alternate key a unique name. You can use one or more entity fields to define the key. For example, to identify an account record with an alternate key, you can use the account name and the account number. You can define alternate keys in the Dynamics 365 web application without writing code, or you can define them programmatically. Note that while you can define the alternate keys in the user interface (UI), they can only be used programmatically, in code.
An entity can have up to 5 alternate keys and for each one a new index will be created. This is done as a background job, so there will be an associated decrease in insert performance, although whether it will be noticeable is hard to say.
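
For the programmatic route mentioned above, here is a minimal sketch using CreateEntityKeyRequest; the key names are invented and service is assumed to be an IOrganizationService.

    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Messages;
    using Microsoft.Xrm.Sdk.Metadata;

    // Define an alternate key on account based on the account number field.
    var request = new CreateEntityKeyRequest
    {
        EntityName = "account",
        EntityKey = new EntityKeyMetadata
        {
            SchemaName = "new_accountnumberkey",
            LogicalName = "new_accountnumberkey",
            DisplayName = new Label("Account Number Key", 1033),
            KeyAttributes = new[] { "accountnumber" }
        }
    };
    service.Execute(request); // the supporting index is then created as a background job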



This allows us to write code like the snippet below to change the account name. The assumption here is that account 1234 is coming from another system, which uses integer keys.

For the record, alternate keys allow the following types:

Decimal Number
Whole Number
Single Line of Text

Code:

    using System;
    using System.Configuration;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Tooling.Connector;

    using (CrmServiceClient client = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Inc"].ConnectionString))
    {
        try
        {
            // Identify the account by its alternate key (accountnumber) rather than by GUID.
            Entity account = new Entity("account", "accountnumber", 1234);
            account["name"] = "Changing Name";
            client.Update(account);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }



Wednesday 3 October 2018

Serve Javascript and source map files locally using Fiddler - part 2

In the previous post in this two-part series we showed how to use Fiddler to serve client-side content from our local machine.

This worked fine but could become pretty tedious so in this post I describe a different way.

A single rule like this will cover all eventualities, but we need to make certain changes to our project.



There is an issue here though.

If we create a new library through the front end, it will be called:
 <prefix>_<name> 
but we would like a rule like this:

 <prefix>_/<prefix>_<name> 

The reason for this is that it allows our rule to match only our custom libraries.

Without <prefix>_/, our rule would match all manner of web resources, which would effectively prevent the system from working.

The solution is to programmatically create the web resources so that the naming convention that we want to use is respected.

In this example the spkl framework has been used to do this; see spkl.json for details and the exported Dynamics solution.



This is the corresponding url:

This enables us to have one rule to ring them all and in the darkness bind them, or something like that :)

Tuesday 2 October 2018

Serve Javascript and source map files locally using Fiddler - part 1

One of the major downsides of developing on Dynamics 365 is that the feedback loop can be pretty long.

Say if we wanted to test a change to an existing Javascript library this is what we would need to do:

  1. Make code changes in Visual Studio
  2. Upload to Dynamics Server
  3. Publish
  4. Test

Steps 2 and 3 can be time consuming, especially on slow networks or heavily used servers. However, there is another way: enter Fiddler.

Fiddler is a Web debugging Proxy that can be used to serve files.

In this example, I have created an extremely simple web resource that contains a single function that provides an annoying greeting every time an account record is opened.

The Dynamics solution can be found here and the script (in TypeScript) can be found here (ignore the method name for now).

We can now use Fiddler to serve the files from disk

  1. Launch Fiddler
  2. Select AutoResponder Tab
  3. Create AutoResponder Rules as shown


I've created a single rule for each file:

account.js
account.js.map
account.ts

This will allow us to debug our typescript code



Note that once the library has been registered with the form, it's possible to add methods to the source code without uploading them to the server, as long as you serve the files from Fiddler.

Let's say we wanted a function called setCreditLimit, which sets the credit limit of the account to, say, $10000. We could write it in Visual Studio and register the appropriate handler on the form without having to actually upload the changed file.

If we are doing piecemeal changes like this the benefit is not great, as we still need to publish the form, but you could register all the events needed up front and then work at your leisure from Visual Studio.

Having to add rules can get a little bit tedious, so in another post we will showcase a different way that will allow us to set up a single rule.


Don't forget to upload the web resources once you've finished.

Monday 1 October 2018

Developing and Testing Plug-ins in Dynamics 365 - Part 4 - Debugging with the Plugin Profiler

If we were using an on-premises instance and had access to the Dynamics 365 server, we would be able to run a remote debugger on the server and step through the plug-in code at our leisure. However, Microsoft is quite keen to move everybody to the cloud, so we need another way. Enter the Plugin Profiler.

The Plugin Profiler is a solution that can be installed on any Dynamics 365 instance and can be used to generate plug-in traces, which can then be used to step through the plug-in code.

The key difference is that stepping through the code is done after the plug-in has run, not in real time. In short, the sequence is as follows:


  1. Trigger Plug-in (e.g. create account ...)
  2. Save Plug-in trace
  3. Load Plug-in trace with Plugin Registration Tool
  4. Step Through plug-in code.

Pre-Requisites
  • Plugin Registration Tool installed (see this post for more details)
  • Admin access to Dynamics 365 instance
  • Visual Studio 2017
Debugging Online Plug-ins
  1. Start Plugin Registration Tool
  2. Click Install Profiler
  3. Select the step to be profiled
  4. Click Start Profiling


  5. Go with the defaults


  6. Invoke Step (Create account in this case)
  7. On Visual Studio, Click Debug | Attach To Process | PluginRegistrationTool.exe


  8. Click Debug


  9. Select Assembly Location and Relevant Plugin
  10. Click on Highlighted arrow and select profile


  11. Place a break point on Visual Studio
  12. Click Start Execution
  13. The break point will be hit and we can step through the code at our leisure



Friday 28 September 2018

Developing and Testing Plug-ins in Dynamics 365 - Part 3 - DataMigration Utility

In the last part of this series we looked at using an option set (drop-down) to store the list of relevant search engines. It turns out that our customer wants to have both the name of the Search Engine and its URL.

1. Using a Lookup instead

We could add two fields, but that's a bit clunky, so we will add a new entity called Search Engine and a new lookup from the account form.

The entity has two fields:

Name (new_name)
URL (new_url)

I have also added a new 1:N relationship to the account entity

Search Engine (new_searchengineid)

The Dynamics 365 solution can be found here.

The plug-in has been modified like this

We have landed ourselves in some unit-testing trouble again with this code, as GetSearchEngine depends on the IOrganizationService, which we can't mock, as discussed previously.

I've refactored the code here to remove this dependency and added unit tests.

2. Test induced Damage

Let's say that we really wanted to use mocks, so I have refactored again to allow this.

The solution is here, with account plug-in and test

In order to be able to mock the data (Search Engines), I have created an ISearchEngineRetriever interface with a GetSearchEngines method and injected it into a new class, SearchEngineFinder, which has our old trusty GetCorrespondantSearchEngine method.

This allows us to mock away the GetSearchEngines method, as in the sketch below.
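
For concreteness, a rough sketch of the shape of this refactoring; the SearchEngine class and the matching rule are invented for illustration (in the linked solution the data comes from the new_searchengine entity), and the mock usage in the trailing comment assumes Moq.

    using System.Collections.Generic;
    using System.Linq;

    public class SearchEngine
    {
        public string Name { get; set; }
        public string Url { get; set; }
    }

    public interface ISearchEngineRetriever
    {
        IEnumerable<SearchEngine> GetSearchEngines();
    }

    public class SearchEngineFinder
    {
        private readonly ISearchEngineRetriever retriever;

        public SearchEngineFinder(ISearchEngineRetriever retriever)
        {
            this.retriever = retriever;
        }

        // Placeholder rule: pick the search engine whose URL appears in the email address.
        public SearchEngine GetCorrespondantSearchEngine(string email)
        {
            return retriever.GetSearchEngines()
                            .FirstOrDefault(se => email.Contains(se.Url));
        }
    }

    // In a unit test the retriever can now be mocked away, e.g. with Moq:
    // var retriever = new Mock<ISearchEngineRetriever>();
    // retriever.Setup(r => r.GetSearchEngines())
    //          .Returns(new[] { new SearchEngine { Name = "Google", Url = "google.com" } });
    // var finder = new SearchEngineFinder(retriever.Object);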

This is silly, don't do it.

You could argue that this would allow you to inject a different ISearchEngineRetriever in the future if you wanted to get the data from somewhere else, and that would be true. But why worry about that eventuality when it might never happen, and if it does happen it's unlikely to happen in the way you anticipated?

If you do know that this data will come from another source in the future, then maybe something along these lines would be reasonable. Maybe.

3. Data

There is a problem with the approach that we have taken, namely adding a new entity: we now need to have that data (Search Engines) in the production environment (as well as test, etc.).

Luckily we have tools; see this post for more details.

We will use the Data Migration Utility to export from our dev environment into our other environments

  1. Fire up Data Migration Utility (DataMigrationUtility.exe)
  2. Select Create Schema
  3. We will need to Log In
  4. Select Solution
  5. Select Entity
  6. Add Fields to be exported, new_name and new_url have been selected here.
  7. Click Save and Export
  8. Select appropriate files for the schema and data
  9. Finished !!!

The advantage of this method is that the Ids (Guids) of the entities will be preserved across environments, which means that Workflows will continue to work seamlessly; more on this here.

A Guid has 2^122 possible values (six bits are reserved), so it's extremely unlikely that duplicate Guids will occur.

To import
  1. Fire up Data Migration Utility (DataMigrationUtility.exe)
  2. Select Import Data
  3. Select data file and Click Import Data