Detailed Specifications

The Detailed Specifications are the final output of the requirements decomposition. These are the requirements that will be used to build out the infrastructure, populate the configurations, and drive the development of software where necessary. Building on the example from our last discussion, one of our High-Level Requirements was “The system shall send text messages to Agents”. This single high-level requirement should be further decomposed into the elements that will be designed and created to fulfill it:

 
Type  Identifier  Requirement
FR    1.2         The system shall send text messages to Agents
FR    1.2.1       Text messages shall not exceed 140 characters
FR    1.2.2       Text messages shall be stored for 120 days
FR    1.2.2.1     Text messages older than 120 days shall be archived
FR    1.2.2.1.1   Archived messages shall be retrievable
FR    1.2.2.1.2   Archived messages shall be retrieved within 15 minutes
FR    1.2.2.1.3   Archived messages shall be stored on Linear Tape File System (LTFS)
 
And so on, until the system has been sufficiently described to know how it will be built. There is a whole science behind how the requirements nesting is done, which I don’t fully understand, but the requirements management tools mentioned last week facilitate it very well. Proper nesting prevents missed specifications and duplicate specifications, and it provides guidance for testing.
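To illustrate the kind of checking those tools perform, here is a hypothetical sketch (not taken from any actual requirements management tool) that validates a set of dotted identifiers for duplicates and missing parents:

```python
# Hypothetical sketch: sanity-checking a dotted requirement hierarchy.
# An identifier like "1.2.2.1" implies a parent "1.2.2"; every child
# needs a parent, and no identifier should appear twice.

def validate_hierarchy(identifiers):
    """Return a list of nesting problems found in the identifier list."""
    problems, seen = [], set()
    for ident in identifiers:
        if ident in seen:
            problems.append(f"duplicate specification: {ident}")
        seen.add(ident)
    for ident in identifiers:
        parts = ident.split(".")
        if len(parts) > 1 and ".".join(parts[:-1]) not in seen:
            problems.append(f"missing parent for: {ident}")
    return problems

print(validate_hierarchy(["1", "1.2", "1.2.1", "1.2.2", "1.2.2.1"]))  # []
print(validate_hierarchy(["1", "1.2", "1.2.2.1"]))  # ['missing parent for: 1.2.2.1']
```

A real tool does far more than this, but the basic idea is the same: the identifier scheme itself encodes the hierarchy, so gaps and duplicates are mechanically detectable.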
 
Even a simple system can generate hundreds of specifications. I have worked with several teams in the past where we spent multiple weeks churning these out in order to define the systems that we were going to build. It can be a painstaking process but when done properly the resulting build/test/deploy cycle can be completed with confidence that the system will meet the business need.
 
As shown in the previous step, the Detailed Specifications can have many nested layers, each of which is traceable to the layer preceding it.
 
The Detailed Specifications are the requirements used in testing to ensure that the system does what it was designed to do. Once the Detailed Specifications are complete, engineering participation in the process drops off significantly.
 
There are artifacts that can be useful for describing the system separate from the requirements. However, these should all be based on the requirements. Examples of these are:
- Logical Component Diagrams
- Physical Component Diagrams
- Process Flow/Swim Lane Diagrams
- Rack Elevation Diagrams
- Network Cable/Port Diagrams
- Value Stream Maps
- Performance Metrics
- Bill of Materials
- TCO/ROI Analysis
- Interface Control Document
- Database Design Document
- Systems Design Document
- Systems Test Plan
- Systems Test Cases
- Implementation Plan
 
Most of these have some representation already in the PQR process which we get into when we start to discuss the Engineering Artifacts.
 
The Resource Feasibility principle should be a major consideration during this step and the build step. It may become obvious earlier that there won’t be enough time, money, personnel, or other resources to complete the design, but this is the period where that determination becomes critical.

High-Level Requirements/Specifications

 
FR/NFR are further decomposed into High-Level Requirements or Specifications. “The system shall send electronic mail to Agents” would be decomposed into:
- The system shall support Simple Mail Transfer Protocol (SMTP) for sending e-mail.
- The system shall support Post Office Protocol (POP) for receiving e-mail.
- Etc…
 
Note that there is a separate specification for sending and receiving. Each requirement or specification should be able to stand on its own. This is to ensure two things:
 
a. The requirement can be properly tested.
b. The requirement can be modified individually in case it either cannot be met or is determined to be unnecessary.
 
As high level requirements are decomposed into functional requirements and specifications there is typically a hierarchical matrix created which allows each item to be traced back to its parent. For example:
 
Type  Identifier  Requirement
BR    1           The system shall send messages to Agents
FR    1.1         The system shall send electronic mail to Agents
FR    1.2         The system shall send text messages to Agents
FR    1.3         The system shall identify Agents individually
FR    1.4         The system shall identify Agents by agency
FR    1.1.1       The system shall support Simple Mail Transfer Protocol (SMTP) for sending e-mail
FR    1.1.2       The system shall support Post Office Protocol (POP) for receiving e-mail
 
This allows us to clearly trace FR 1.1.1 to FR 1.1 which is then traceable to BR 1. Using this technique ensures that all requirements can be traced back to the business need. These requirements are used to build test cases later in the systems life cycle.
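Because the identifiers encode the hierarchy, the full trace chain can be derived mechanically. A hypothetical sketch:

```python
# Hypothetical sketch: deriving the trace chain from a dotted identifier.
# "1.1.1" traces to "1.1", which traces to the business requirement "1".

def trace_to_root(identifier):
    """Return the chain from a requirement up to its top-level parent."""
    parts = identifier.split(".")
    return [".".join(parts[:i]) for i in range(len(parts), 0, -1)]

print(trace_to_root("1.1.1"))  # ['1.1.1', '1.1', '1']
```

A test case written against FR 1.1.1 can therefore always be tied back to BR 1 without any extra bookkeeping.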
 
The Optimality and Design Criterion principles are foundational to this step and the next. What the solution looks like will most likely be determined at this point. If the solution is determined prior to this point, there is a greater chance of the design being driven by the capabilities of the solution instead of the desired capabilities of the design. In other words, we should not base our solutions on what we have already purchased.

Continuous Engineering - Continue Design and Automation all the way to Production

One of the terms that gets tossed around a lot in discussions of Continuous Engineering is DevOps. That is because an organization that is effectively using DevOps methods and tools is, by default, practicing Continuous Engineering. Unfortunately, DevOps also tends to mean different things to different people. There is a fairly concise set of properties that determines the level of success for an organization adopting DevOps. Of course, since it is an IT methodology, there is an acronym for it: CALMS.

- Culture
- Automation
- Lean
- Measured
- Sharing
 
Culture

I’ve always liked this quote, which came from CEB DevOps promotional materials:

“DevOps is ultimately not about the organizational structure, but about the ability to break out of siloed mindsets and build stronger mechanisms for Infrastructure-Applications collaboration.”

That sums up the culture aspect very well. It doesn’t matter what your role is, who your supervisor is, or where you land in the pecking order. What does matter is that we think and act as “A TEAM” as opposed to part of a team.

I was on a support call recently and heard a comment that went something like this: “What’s wrong with your software now?” When I heard that I cringed a little, because that is the polar opposite of where we need to be as an organization. It is not yours, mine, or theirs; it is ours.
Automation
 
I am a huge proponent of automation and have always leaned toward automated environments. A quick example: when we first implemented a SAN at CJIS, it was one of the early LSI Logic with Brocade configurations. It was an extremely complex system for data storage with full redundancy, as well as a secondary SAN to which all of the data was replicated. It was some very important data. As an early adopter, there weren’t a lot of tools available to us for monitoring and managing at the level we desired. We would take 4 different reports from the switches and controllers, combine the data in a spreadsheet, manipulate it, and produce another report that showed us exactly where our data was in the SAN and where the hot spots were during operations. It took about 2 hours/day to produce the report. Using Perl and some shell scripts, I was able to get the data, manipulate it, produce the report, and provide a rudimentary search of the final output for the information we required. It took me about 3 weeks to write and test, and consumed most of my day (or evening really, I was on second shift at the time). When completed, it produced the desired results in 10-15 minutes. We used that tool, with modifications, every day for 4 years (automatically, of course) until we implemented an Enterprise SAN using EMC.
 
The point I want to make is that the time you think you don’t have for automating your repetitive tasks can be more than recovered by just doing it. The more successful DevOps organizations have their entire deployment chain automated. Every step along the way has been automated to the point that once code is committed it can be promoted to Production in a few hours or less, and the releases are continuous, not periodic as we operate today. We have some of the technology necessary to make this work in the organization; it just needs to be more widely used. We will introduce some of these tools in later discussions.
 
Lean
 
Minimum Viable Product. This is one of my favorite terms within DevOps and Design Thinking. The implication is that the functionality provided by a release should be only as much as is needed to provide the desired/required business value. Maybe someday I’ll address what business value really means; in the meantime there is a very good book by Mark Schwartz called “The Art of Business Value” that explains the term from multiple viewpoints. I highly recommend it for anyone who wants to understand “business value”. That’s not the only idea from Lean methodologies that we could be applying to our systems. We will discuss this more in future posts.
Measured
 
In the “Agile Manifesto,” the first principle states “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” There are many metrics which can be used to determine what “valuable” really means. What key metrics do we use to determine that today? For the most part it comes down to the user experience. How many quotes are completed? How many bills are paid on-line with no issue? How fast is it to pay that bill? Etc…
 
Part of that value includes the need to measure ourselves. Closely tracking defects, catching them early in the process, and recording how long they took to correct are just a few metrics that can be applied to gauge the value of IT as a service to the company. The introduction of ServiceNow should help a great deal in this regard. It makes it easier (with the right permissions) to drill into the data being collected and create reports, trends, and other visual data displays. We have depended primarily on subjective data in the past to show improvement (or lack of it); we now have a method to show improvement based on objective data.
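As a concrete (hypothetical) example of one such objective metric, here is a sketch of mean days-to-correct computed from defect records; the record fields are illustrative, not ServiceNow’s actual schema:

```python
# Hypothetical sketch: one objective metric, the mean number of days
# between a defect being opened and being corrected.
from datetime import date

def mean_days_to_correct(defects):
    """Average days from opened to fixed, over defects that were fixed."""
    spans = [
        (date.fromisoformat(d["fixed"]) - date.fromisoformat(d["opened"])).days
        for d in defects
        if d.get("fixed")
    ]
    return sum(spans) / len(spans) if spans else None

defects = [
    {"opened": "2017-01-02", "fixed": "2017-01-05"},  # 3 days
    {"opened": "2017-01-03", "fixed": "2017-01-10"},  # 7 days
]
print(mean_days_to_correct(defects))  # 5.0
```

Tracked release over release, a number like this shows improvement (or the lack of it) without anyone having to argue about impressions.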
 
Sharing
 
We were all taught to share as children; some are better at it than others. The sharing aspect of DevOps is very broad. It includes sharing people, data, configurations, processes, resources, roles, documentation, and most importantly information. One of the themes that our senior leadership has been pushing is the broadening of roles within IT: allowing individuals to take on responsibility beyond what might be considered part of their defined role. This plays very well in the DevOps methodology, since the idea is to have resources available that can handle multiple types of tasks at different times. In a fast-paced environment we don’t want to be waiting on a single individual because they are the only one who can perform the work. Ideally there would be several interested individuals who seek out those skills and can be made available as needed.
 
In a world where automation is the norm, broadening becomes easier to come by. That is because automation setup is very often a self-documenting process. The configuration files and scripts become the source of the documentation. Anyone with the proper permissions can review that information and either re-use it or modify it without much difficulty. This type of environment also prompts greater sharing among the team(s).
 
The outcomes provided by the DevOps methodology have been proven across the industry: higher quality, faster speed to market, lower cost, and easier maintenance. It is a huge cultural shift to implement, but well worth the effort. We can all expect to see more on this topic in the not-too-distant future.
 
Next week we will discuss Requirements Management.

Continuous Engineering - Test Driven Development (TDD)

Has anyone ever asked you one of the following questions?

1)      How do we move quality left / have fewer defects?
2)      How can we move faster / optimize our processes?
3)      Where is your unit test plan and what were the results?
 
The answer is probably “yes”, especially if you’re an engineer. I’ve performed both engineering and non-engineering roles in my IT career, so I know how it feels to be on both sides of the coin – as a Project Manager or Analyst, I often found myself asking the question, “Did my engineer even test this?” And as an engineer, I found myself saying, “It doesn’t matter how much testing I do, or how much documentation I write, I always miss something!”
 
But there are ways to improve the engineering world of unit test – one of them is Test Driven Development (TDD).
 
Benefits
Before I dive into the details of how TDD works, here’s a sales pitch. Why should anyone care about TDD? If done properly, TDD can positively influence many metrics we care about:
1)      Increased Quality
2)      Decreased Time to Market
3)      Decreased cost of ownership
4)      Balancing technical debt with refactoring
 
My first assignment here at The ERIE was to dynamically generate an e-mail detailing specific changes to commercial policies out of our C-LION system. One of my personal goals was to explore TDD and test automation, and gather detailed data around what value it delivered – measured in real hours of effort and number of defects.
 
Here were the stats on defects (combined across our assembly testing and the QA Test Pass 1):
1)      Requirements Coverage: This piece of code was responsible for fulfilling 14% of the requirements on the project
2)      Defect Generation: This piece of code was responsible for 4% of the defects. Of these, all were identified during assembly testing, and none were found during QA Test Pass 1. A side benefit was that the quality of the code was so high, that QA was able to reduce their planned scripts in Pass 2 by 20 scripts! That’s moving quality left.
 
From a cost perspective, I originally estimated that I would write 158 Tests at an estimated 90 minutes of coding time per test (that’s 237 hours of effort). My actual effort was 339 Tests at 15 minutes per test (85 hours of effort).
 
Conclusion: TDD is faster to do than I expected and delivers higher quality than other pieces of code that did not utilize TDD. As a side benefit, the tests were repeatable, providing a good safety net for refactoring and enhancements in the future.
 
Are you sold yet? I was.
 
Process & Methodology
The TDD process is deceptively simple. Here’s an outline of the process (pulled from Wikipedia):
- Write a test case
- Confirm the new test is failing
- Write “minimal” code to pass the test
- Confirm the new test is passing
- Refactor the code
- Rerun all tests and confirm they pass
- Repeat!
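As a hypothetical illustration of one red-green cycle, here is the loop in Python’s built-in unittest framework, reusing the 140-character text-message rule from the requirements example earlier (the function name is made up):

```python
# Hypothetical red-green cycle: the tests below are written first and fail;
# then the "minimal" function is written to make them pass.
import unittest

def make_text_message(body):
    # Minimal code written after watching the tests fail.
    if len(body) > 140:
        raise ValueError("text messages shall not exceed 140 characters")
    return body

class TestMakeTextMessage(unittest.TestCase):
    # Written first; red until make_text_message exists and behaves.
    def test_accepts_short_message(self):
        self.assertEqual(make_text_message("hi"), "hi")

    def test_rejects_long_message(self):
        with self.assertRaises(ValueError):
            make_text_message("x" * 141)

# Run with: python -m unittest <module>
```

Once both tests are green, the function can be refactored freely: rerunning the suite confirms the behavior survived.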
 
At first, it seems silly to write a failing test case FIRST. I remember thinking, “Of course it’s going to fail; the code’s not written yet!” But the value isn’t actually in writing a failing test first – instead, it’s about designing code that is, in fact, easily testable. This can be a difficult mindset shift for engineers (it was for me!), especially when unit tests can sometimes be an afterthought in our Tech Designs.
 
What I learned was that I could more easily identify flaws that I had not anticipated in my design. I also was able to quickly refactor code when I needed to and confirm that the code was still giving me the expected output. Lastly, I had immediate feedback on most defects that I accidentally introduced into the code. It wasn’t fool-proof, but it definitely helped. The increased quality and time reductions were just a natural consequence of the process.
 
There were a few drawbacks. First, the code tended to become more modular, which increased the potential for reuse but also increased complexity. Second, the mindset shift produced by TDD was so drastic that it was difficult to explain why I had written the code in a particular way – it had to be testable, and that impacted the design. Finally, it was easy to get carried away with testing every possible combination of use cases; a balance needed to be found between test coverage and the risk/impact of potential defects.
 
Available Frameworks & Tools
There is a plethora of TDD tools available today. Often the tool has to vary with the technology you’re working with (JUnit for Java, Cucumber for UI, etc.), or you can build your own framework when one isn’t readily available.
 
If you’re interested in learning more about TDD, a quick search on the internet is all you need. For hands-on experience, talk to your Tech Lead or SLE about shadowing another team that’s doing TDD today to see how it’s done, or talk to your supervisor about attending some training. The investment is small, and the potential payout is high.
 
I hope TDD can give you some of the answers to those tough questions about quality and speed. Happy testing!
 
I want to thank Tim Weindorf for taking the time to author this post. If anyone in the organization would like to follow in his footsteps just contact me and we'll discuss making that happen.
 
Next time we will discuss design and automation all the way to Production.

Books on Statistical Analysis

Booz Allen Hamilton (2013) The Field Guide to Data Science. http://www.boozallen.com/media/file/The-Field-Guide-to-Data-Science.pdf

Hubbard DW (2010) How to Measure Anything: Finding the Value of “Intangibles” in Business, 2nd ed. (John Wiley & Sons, Hoboken, NJ).

Hillier F, Hillier M (2010) Introduction to Management Science: A Modeling and Case Study Approach, 4th ed. (McGraw-Hill Higher Education, New York).

Vose D (2008) Risk Analysis: A Quantitative Guide, 3rd ed. (John Wiley & Sons, Chichester, UK).

Big Data: The Next Frontier for Innovation, Competition, and Productivity, a McKinsey & Company report. http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation




FURTHER READING

Berry MJA, Linoff GS (1999) Mastering Data Mining: The Art and Science of Customer Relationship Management (Wiley, New York).

Clemen RT (1997) Making Hard Decisions: An Introduction to Decision Analysis, 2nd ed. (Duxbury Press, Pacific Grove, CA).

Few S (2012) Show Me the Numbers: Designing Tables and Graphs to Enlighten, 2nd ed. (Analytics Press, Burlingame, CA).

Hand DJ, Mannila H, Smyth P (2001) Principles of Data Mining (MIT Press, Boston).

Hillier FS, Lieberman GJ (2005) Introduction to Operations Research, 8th ed. (McGraw-Hill, New York).

Law AM, Kelton DW (2000) Simulation Modeling and Analysis, 3rd ed. (McGraw-Hill, New York).

Ross SM (2010) Introductory Statistics, 3rd ed. (Academic Press, Burlington, MA).

Siegel E (2013) Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die (Wiley, New York).

Tufte ER (2001) The Visual Display of Quantitative Information, 2nd ed. (Graphics Press, Cheshire, CT).


documents

Accident Year Underwriting Profit/Loss

Columns: Accident Year Ending | Direct Earned Premium | Developed Loss Including ULAE | Ultimate Loss Ratio | Historic Expense | Underwriting Profit/Loss ($) | Underwriting Profit/Loss (%) | Return on Allocated Surplus (% of Surplus; calculated per state per LOB)

Coverage X1
12/31/2012 | $2,595,019 | $577,472 | 22.3% | $903,067 | $1,114,480 | 42.9% | 82.1%
12/31/2013 | $2,830,650 | $928,443 | 32.8% | $1,050,171 | $852,036 | 30.1% | 61.4%
12/31/2014 | $3,268,412 | $2,096,227 | 64.1% | $1,189,702 | -$17,517 | -0.5% | 12.1%
12/31/2015 | $3,833,694 | $1,048,872 | 27.4% | $1,326,458 | $1,458,364 | 38.0% | 74.2%
12/31/2016 | $4,090,153 | $947,264 | 23.2% | $1,345,660 | $1,797,229 | 43.9% | 83.7%
Total      | $16,617,928 | $5,598,278 | 33.7% | $5,815,058 | $5,204,592 | 31.3% | 63.4%

Coverage X2
12/31/2012 | $5,516,452 | $6,650,128 | 120.6% | $1,919,725 | -$3,053,401 | -55.4% | -32.6%
12/31/2013 | $6,050,235 | $3,715,295 | 61.4% | $2,244,637 | $90,303 | 1.5% | 6.3%
12/31/2014 | $7,033,216 | $6,754,950 | 96.0% | $2,560,091 | -$2,281,825 | -32.4% | -16.9%
12/31/2015 | $8,130,777 | $2,879,208 | 35.4% | $2,813,249 | $2,438,320 | 30.0% | 25.8%
12/31/2016 | $8,469,447 | $5,144,821 | 60.7% | $2,786,448 | $538,178 | 6.4% | 9.6%
Total      | $35,200,127 | $25,144,402 | 71.4% | $12,324,150 | -$2,268,425 | -6.4% | 0.9%

Totals (all coverages)
12/31/2012 | $10,704,436 | $9,175,796 | 85.7% | $3,725,144 | -$2,196,504 | -20.5% | -10.4%
12/31/2013 | $11,690,526 | $6,218,654 | 53.2% | $4,337,185 | $1,134,687 | 9.7% | 14.2%
12/31/2014 | $13,494,174 | $11,074,962 | 82.1% | $4,911,880 | -$2,492,668 | -18.5% | -8.7%
12/31/2015 | $16,087,153 | $5,208,060 | 32.4% | $5,566,155 | $5,312,938 | 33.0% | 33.2%
12/31/2016 | $17,616,631 | $7,486,287 | 42.5% | $5,795,871 | $4,334,473 | 24.6% | 26.4%
TOTAL      | $69,592,920 | $39,163,759 | 56.3% | $24,336,235 | $6,092,926 | 8.8% | 13.5%
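The derived columns in the table follow directly from the raw figures: loss ratio is developed loss divided by earned premium, and underwriting profit is premium minus developed loss minus historic expense. A quick sketch to sanity-check a row:

```python
# Sketch of how the table's derived columns relate to the raw figures:
#   loss ratio (%)      = developed loss / earned premium * 100
#   underwriting profit = earned premium - developed loss - historic expense

def underwriting(premium, developed_loss, expense):
    loss_ratio_pct = round(developed_loss / premium * 100, 1)
    profit = premium - developed_loss - expense
    return loss_ratio_pct, profit

# Coverage X1, accident year ending 12/31/2012:
print(underwriting(2_595_019, 577_472, 903_067))  # (22.3, 1114480)
```

Both values match the 22.3% loss ratio and $1,114,480 underwriting profit shown in the first row of the table.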

link to study
https://www.springboard.com/workshops/data-science-intensive-course?utm_source=quora&utm_medium=cpc&utm_campaign=ds2_leadgen_20161026&utm_term=us_ca&utm_content=ad_1

http://freestatistics.altervista.org/?p=learning
https://www.openintro.org/stat/
https://www.quora.com/What-are-some-good-resources-for-learning-about-statistical-analysis

One new term I learned, Biz Stats: http://www.bizstats.com

Harvard classes
http://www.eng.utah.edu/~cs5961/
http://www.stat.cmu.edu/~cshalizi/uADA/15/
http://www.stat.cmu.edu/~cshalizi/350/
https://www.khanacademy.org/#statistics

Statistics (Stanford)
http://scpd.stanford.edu/courses/statistics-courses.jsp

Carnegie Mellon
http://oli.cmu.edu/courses/free-open/statistics-course-details/

Udacity
https://www.udacity.com/course/statistics--st095

Duke University
https://www.openintro.org/stat/

Udacity Nanodegree
https://www.udacity.com/course/machine-learning-engineer-nanodegree--nd009?utm_source=quora&utm_medium=ads&utm_campaign=quora-ads-machine-learning-7-desktop

Nate Silver’s blog (very good)
https://fivethirtyeight.com/

Stat Trek (online)
http://stattrek.com/

Udacity Intro to Statistics
https://www.udacity.com/course/intro-to-statistics--st101

Hartford professor
http://www.hartford.edu/barney/about-us/faculty-staff/faculty-pages/braithwaite.aspx

Rajkumar Ethirajulu, a good profile to learn from and benchmark against
https://in.linkedin.com/in/rajkumarethirajulubigdata

Agile Methodology

1. Plan

The Plan phase moves a project from an approved, prioritized High-Level Analysis to the point where it is ready to begin analyzing and defining requirements for the solution. 

Execute Entry Criteria
□ Approved Six Questions (Accountable Role: Idea Owner)

Plan Activities
□ Request Clarity project number (Accountable Role: IT Delivery Manager)
  Description: The delivery manager requests the project to be set up in Clarity; the Clarity Team creates a project number in Clarity.
  Templates: Clarity Project Request Form
□ Assign a Project Manager (Accountable Role: IT Delivery Manager)
  Description: The delivery manager works with the resource managers to assign a Project Manager to the project.

2. Iterative Build
The Iterative Build phase uses the plan to develop the software or technology through iterations. At each iteration, the team can use the knowledge they gained during previous iterations to make modifications and add functions more efficiently.

Iterative Build Activities
□ Complete story card design (Accountable Role: UX Lead)
  Description: In conjunction with detailed requirements, design and test the user interface for each story card; must be completed 1 to 2 iterations prior to the scheduled build.
  Templates: User Experience; Detailed Hardware/Software Specification
  Sub-Activities (Contributing Roles):
  □ Complete user analysis (UX Lead, BA Lead, Project Sponsor, Project Manager)
  □ Design solution for each story card (BA Lead, UX Lead, System Test Lead)
  □ Create wireframes (UX Lead, BA Lead, Tech Lead, Project Sponsor)
  □ Create prototypes (UX Lead, BA Lead, Tech Lead)
  □ Conduct usability tests (UX Lead, BA Lead)
  □ Document design specifications (UX Lead)
  □ Update and store inventory in the project’s SharePoint site (BA Lead, Project Manager)
□ Begin Transition planning (Accountable Role: Project Manager)
 


3. Test & Production Readiness
The Test phase takes a working software or technology and tests it for functionality, system, performance, and user acceptance to ensure the deliverable meets quality standards. 

Test Activities
□ Test software or technology solution (Accountable Roles: System Test Lead, Performance Test Lead)
  Description: Test the software and technology, including system, performance, and user acceptance testing, to ensure the software or technology satisfies business goals and objectives.
  Templates: Service Level Agreement
  Sub-Activities (Contributing Roles):
  □ Move and configure code through environments (Software Configuration Manager, Developers)
  □ Complete system testing (System Test Lead)
  □ Complete performance testing (pass PEGT) (Performance Test Lead, Performance & Capacity)
  □ Verify authentication/authorization models (System Test Lead, Info Security Lead)
  □ (If applicable) Complete security assessment test (System Test Lead, Info Security Lead)
  □ (If applicable) Validate monitoring and detection capabilities (Info Security Lead)
  □ Complete user acceptance testing (System Test Lead, BA Lead)
  □ Review and approve user acceptance test results (System Test Lead, BA Lead, OCM Lead)
  □ Validate SLAs (Performance Test Lead, Performance & Capacity)
  □ Review and approve SLAs (Project Manager, Project Sponsor, Solution Architect Lead, Tech Lead, Operational Lead, BA Lead, System Test Lead)

4. Transition
The Transition phase includes the activities necessary to deliver working software or technology as an enabler to the customer. The project leadership team is accountable for delivering software or technology that meets the business case’s requirements and success measures. Documentation for the closure decision is minimal and covers only the requirements to complete the transition.

Transition Activities
□ Complete training and communications (Accountable Role: Project Manager)
  Description: Conduct training and send internal and external communications to prepare the customers for the new software or technology.
  Sub-Activities (Contributing Roles):
  □ Execute Organizational Change Management Plan (OCM Lead)
  □ Execute Communication Plan (OCM Lead, Project Manager)
  □ Review authentication/authorization models (BA Lead, Info Security Lead, Project Sponsor)
  □ Review software/technology with production support staff (BA Lead, Project Manager)
  □ Execute Training Plan (BA Lead, Project Manager)
□ Deploy software or technology (Accountable Role: Release Manager)
  Description: Move the software or technology into production for customers to begin using.
  Templates: Detailed Hardware/Software Inventory