Tuesday 23 August 2016

The Gartner Hype Curve 2016






The latest iteration of the Gartner Hype Curve has been released for 2016. The hype cycle is a branded graphical presentation developed and used by the American information technology research and advisory firm Gartner to represent the maturity, adoption and social application of specific technologies. It provides a graphical and conceptual presentation of the maturity of emerging technologies through five phases.

The latest Gartner Hype Cycle for Emerging Technologies illustrates how quickly technology innovations have the potential to redefine buyer, supplier and customer relationships for any business. Gartner added 16 new technologies to the Hype Cycle this year, including blockchain, machine learning, general-purpose machine intelligence and smart workspace, among many others.

The yearly update to the diagram is often a reminder of how focused and consumed the tech sector is with hype. Many technologies are hailed as life-changing years or even a decade before they are actually usable and mainstream. Drones are a good example. Amazon is promoting the idea that it can do drone deliveries, yet the curve shows drones have not even reached the Peak of Inflated Expectations. Drones won’t be delivering your Amazon order any time soon.




There is some key information within the 2016 diagram:
  • Sixteen new technologies are included for the first time this year: 4D Printing, Blockchain, General-Purpose Machine Intelligence, 802.11ax, Context Brokering, Neuromorphic Hardware, Data Broker PaaS (dbrPaaS), Personal Analytics, Smart Workspace, Smart Data Discovery, Commercial UAVs (Drones), Connected Home, Machine Learning, Nanotube Electronics, Software-Defined Anything (SDx), and Enterprise Taxonomy and Ontology Management.

The most interesting addition is Enterprise Taxonomy and Ontology Management – essentially the categorisation of things. With content search being the issue it is when dealing with big data, this potentially has the most B2B impact.

  • Immersive experiences will become more intelligent and contextually aware, enabling greater productivity. Technologies enabling transparently immersive experiences include 4D Printing, Brain-Computer Interface, Human Augmentation, Volumetric Displays, Affective Computing, Connected Home, Nanotube Electronics, Augmented Reality, Virtual Reality and Gesture Control Devices.

There is a lot of media attention around the AR scene with the success of Pokémon GO, and around the new VR headsets such as the HTC Vive and Oculus Rift. The odd one on the list is Affective Computing – encompassing speech and facial recognition (old tech, right?) but now adding emotion into the mix.

  • Smart machine technologies will transform manufacturing and its related industries: Smart Dust, Machine Learning, Virtual Personal Assistants, Cognitive Expert Advisors, Smart Data Discovery, Smart Workspace, Conversational User Interfaces, Smart Robots, Commercial UAVs (Drones), Autonomous Vehicles, Natural-Language Question Answering, Personal Analytics, Enterprise Taxonomy and Ontology Management, Data Broker PaaS (dbrPaaS), and Context Brokering.

Driverless cars and drone deliveries are in the news already – although both are struggling to reach the mainstream. Robotics is probably the next transformational technology group to change human existence, the last being the internet. Watch out for the term Smart Dust. As a collective term for many new and emerging technologies it hasn’t really had its moment in the sunshine, but it’s only a matter of time.

  • Emerging technologies are enabling entirely new business models, driving a Platform Revolution. Platform-enabling technologies making new business models possible include Neuromorphic Hardware, Quantum Computing, Blockchain, IoT Platform, Software-Defined Security and Software-Defined Anything (SDx).

The Blockchain technology is undoubtedly about to change the financial world. The hyper-secure method of managing and tracking financial transactions should go a long way toward negating financial fraud within the banking and public sector.

  • Fourteen technologies were removed from previous years’ diagrams: Hybrid Cloud Computing, Consumer 3D Printing, Enterprise 3D Printing, Bioprinting Systems for Organ Transplant, Advanced Analytics with Self-Service Delivery, Bioacoustic Sensing, Citizen Data Science, Digital Dexterity, Digital Security, Internet of Things, Neurobusiness, People-Literate Technology, Speech-to-Speech Translation and Wearables.

Many of these removals reflect expected trends and technologies that simply did not emerge, or that have been consolidated into other tech terms and categories. Wearables is the interesting one: no longer seen as new or exciting, or as having the kind of impact imagined a few years ago.


The conclusion we can draw is clear. The tech industry will continue to thrive on hype and the promise of change of the largest magnitude. The man in the street will continue to struggle to identify those technologies that really do have the power to improve their lives. Investors will continue to chase the unicorn start-ups – hoping that one will survive the full Gartner curve and become a big exit for them. Next year’s Gartner diagram will arrive and the whole technology debate will begin another cycle.

Friday 19 August 2016

Net Neutrality - do you care?








Summary


Net neutrality is a term many are not familiar with, and those that claim they are can struggle to explain it if asked. The most common question then becomes: "How relevant is it to you and what you do online?"

Those that propose the notion would argue that the debate about how networks operate is fundamentally one about the future of the internet for everyone around the world.

To put the issue into easier terms, let's consider trains and the difference between standard and first class. First-class carriages on a crowded commuter train represent special treatment for those that can pay; standard-ticket passengers are crammed against each other's armpits because of their unwillingness or inability to pay. Herein we have a good analogy for the net neutrality debate.

On the net neutrality train, all passengers (ie data) would be treated equally, with no special carriages for those able to pay.

The principle that all traffic on the internet should be treated the same dates back to the emergence of the internet and for many encapsulates the whole idea of an open-for-all internet, free from government or corporate control.

So what are the two sides of this argument?


Those in favour: Net neutralists argue that Internet Service Providers ("ISPs"), which provide the pipes carrying content to any public internet service, should only run the networks and have no say over how and what content flows to users, as long as it is legal.

Those against: ISPs argue that if the internet could run on a multi-layered basis, with some channels slower but free and others faster and paid for, the revenue generated would benefit everyone and raise minimum standards. An internet "fast lane" for providers prepared to pay is inevitable in today's data-hungry world, so the common argument goes.

USA versus Europe versus South America


The current debate was really kick-started by a ruling in a US case in January 2014, when ISP Verizon successfully challenged the US Federal Communications Commission ("FCC") over its net neutrality policy - known formally as the Open Internet rules.

The Court of Appeals struck down two of the three Open Internet rules, opening the way for ISPs to start charging fees to carry bandwidth-hungry data on their networks. In March 2014, Netflix agreed to pay a fee to Comcast to improve the speed at which its service reached consumers' homes.

The part of the rule change that has sent the industry into uproar is a proposal for so-called fast lanes, allowing ISPs to charge content providers as long as the terms were "commercially reasonable".


Fundamentally, Europe doesn't agree with the US.


In April 2014, the European Parliament voted to restrict ISPs from charging services for faster network access. It also judged that mobile and broadband network providers should not be able to block services that compete with their own.

Slovenia and the Netherlands have already incorporated the principle in their national law.

Brazil, by way of comparison, has a new law stating that telecom companies cannot charge different prices based on the amount of content accessed by users. It also states that ISPs cannot interfere with how consumers use the internet. Meanwhile, neighbouring Chile was the first country to pass net neutrality legislation, back in 2010.



Why should you care?


Depending on the outcome of all this wrangling, it could either hit your wallet or change your watching habits.

If net neutrality is upheld, ISPs could pass on the cost of delivering bandwidth-hungry services by raising the monthly fee they charge for internet access. Users may get a charge that reflects their usage, with those using video-on-demand services being charged more.

On the other hand, if ISPs are able to start charging fees for prioritised access to content then users may find that those websites not in the fast lane are slower to load.  There are fears that ISPs might block access to rival services or slow them down so much as to be unusable.

Consumers could also be charged more by content providers forced to pay more to get their services delivered at full quality.

Anyone thinking this is a US-only issue should note that, following its agreement to pay a fee to Comcast and Verizon, Netflix put up the price for its monthly streaming service in Europe as well as America.


What can you do?

As with most causes there is a petition. You can join the fight against the corporates by signing it here.

If you would like to understand more then check out these current links:


Tuesday 16 August 2016

Brexit and Data Protection – What Happens Now?




The UK's unexpected decision to leave the EU ("Brexit") will require all businesses to adjust their approach to data protection.

The fundamental issue for most businesses will be how data protection is governed in the UK over the next five to ten years.

Prior to the Brexit vote, most European businesses were gearing up to adopt the EU General Data Protection Regulation ("GDPR"), which is scheduled to replace the existing legislation on 25 May 2018. It was designed as an overhaul of existing laws that had become weaker than the regulators would like. The new rules introduce much stronger data governance and management requirements and lead to clearer data protection responsibilities, more opportunities for personal claims for damage following a data loss, more stringent requirements for organisations such as IT cloud providers, tighter rules on transferring data outside the EU, and greater penalties for data breaches.

The UK Brexit vote and the unknown timescales and process (which could take years) will have significant implications for UK businesses that use data on a day to day basis.

There has to be the prospect that the UK does not adopt the GDPR along with the rest of Europe. This could cause significant issues for businesses that need to transfer data between the UK and EU destinations. Red tape looms, along with higher costs and lower operational efficiency.

These issues and changes may leave the market with a difference of opinion over the commercial viability of working with UK companies operating outside of the GDPR compared with EU-based companies operating within it. That could form the basis of a competitive advantage for EU-based companies.

What is important is that companies, especially those transferring data across jurisdictions, consider how the data protection legislation will affect them. Data governance and security are part of business strategy, and risk mitigation is a crucial aspect of protecting your business.


Thursday 11 August 2016

Technical Due Diligence - The Basics






Opening Summary


If you’re staring down the barrel of a technical due diligence investigation for the first time, the prospect can be daunting. The VC has brought in a specialist to dig down to the core of your business and you’re quite rightly feeling vulnerable and nervous. The following article is aimed at helping you understand what is happening and how to deal with it calmly and objectively.


Technical Due Diligence


From the beginning there is a temptation to treat the assessor as hostile. The assessor is often seen as someone who can sink the deal, but that is simply not the case. The assessor is not looking for reasons to kill the deal; they are looking for reasons to say yes to the investor.

It’s common for the assessor to have been in the entrepreneur’s position and therefore to understand the level of anxiety that the investigation will be generating around the company. That understanding, and first-hand experience of the challenges and realities of running a start-up, will ensure both empathy and a realistic point of view: nothing will be perfect under scrutiny.

Ultimately the assessor works for the VC and must report honestly and accurately on any deficiencies or concerns made visible during the investigation. Not to do so would put the assessor at risk of a negligence claim if the deal completes and something that should have been highlighted wasn’t, and subsequently causes a loss to the investor.

While being able to answer questions in detail about your business and how it works is critical, there is also a secondary set of factors: how you engage with those questions, how quickly you think, and how well you deal with the overall process. These give insight into the human behind the business, as opposed to just seeing the business as a series of process-based interactions.

The primary purpose of the assessment is to evaluate four key aspects of the business from a technical standpoint: Vision, Scalability, Maintainability and Continuity.

Vision



The company was (probably) founded by someone with a vision: a future for the tech and for the company. The first questions, then, should be: does that vision still exist, how is it described, is it road-mapped, and what does that future look like?

  • Sometimes the ownership of the technical vision has been transferred and the responsibility is with someone other than the founder.
  • Does the company’s management share the same vision of where the tech is taking them?
  • Does the vision make commercial sense and can it be correlated with other parts of the tech spectrum?

The quality of the leadership within the company is critical. Every successful tech company has someone at the head of the organisation with a real vision for the technology. The assessment needs to show and understand how well (or not) that leadership is working.

Diversity is another key indicator. If the entire leadership of the company comes from essentially the same personality type, then the company will become one-dimensional. Business leaders, technology types and others must combine to give a complete outlook on delivering the company’s vision.

Sample questions around the company’s technical vision might include:
  • Do you have a clear road map for where you want the product to be in one month, six months?
  • What is your recruitment ratio for technical staff? How many interviews does it take to find a new recruit?
  • What is the employee turnover for the company on a yearly basis and what factors do you attribute that turnover to?
  • Do you look to recruit people you know from previous jobs? If not, why not?
  • Are you using Cohort Analysis or some other methodology for user feedback and testing?
  • What is the product tempo? How many releases or updates every month/quarter/year?
  • What are the number 1, 2 and 3 concerns over the current product or service?
  • What is the recruitment process in detail (How many stages? Code tests?)


Scale


This would seem obvious at first glance but getting a tech business to scale effectively can be more difficult than just adding more servers or connectivity. There is some science, some methodology and even some philosophy that needs to come together to make a business scale in a frictionless way.

Sample questions on scaling might include:

  • Would/Could you scale/change to accommodate x10, x100, x1000 times more users?
  • What would you have to scale/change to accommodate a million users?
  • What aspects do you monitor and at what level of detail?
  • What metrics would tell you that you are not scaled appropriately?
  • What aspects of the system do you suspect might not scale well?
  • What is your hosting solution, how is it backed up and what is the security approach?
  • Do you use any third party services, and what is your redundancy against 3rd party failures?
  • Can you show me a network diagram?
  • What single points of failure exist?
  • What are your top 3 concerns around the current systems?


Maintain


There are a number of concerns that an investor will have over any tech acquisition when it comes to the existing tech. 

  • Will the existing staff depart once the company has changed hands, meaning that all the experience is leaving the business?
  • Will new staff joining the company just find a mess of spaghetti code that takes months or even years to understand?
  • Has any documentation been created in a meaningful way that will support the continued development process?

One of the common issues is that these questions are answered differently by the business types running the company and the technical types involved in the tech creation process. The assessment must clarify the issues around the existing code base and what will happen when new team members attempt to write new code against it.

These sample questions seem like common sense, but they still form the basis of an investigator’s basic train of thought:

  • Do you use any form of source/version control and if so, which one and why?
  • Do you build/check-in on a formal timetable - daily, weekly, whenever?
  • Do you insist on comments at check-in?
  • Do you use Continuous Integration or create unit tests?
  • Do you have formal code reviews?
  • What development methodology do you use and why?
  • Can you deploy a build to staging or production with a single click?
  • Do you have dedicated and qualified testers?
  • When do you deploy?
  • Does the software automatically notify you of errors?
  • Do you have a bug tracking system?
  • Could you walk me through some code if required?
  • How many defects did you close last month?
  • How many open defects are there?

Continuity


Continuity is essentially about disaster recovery, back up plans and contingencies. If we assume that everything runs as we would like and does so forever, then we’re setting ourselves up for a fall.
There are digital failures to assess. What happens if your data centre goes off line, if a server fails or if connectivity is lost?
There are physical disasters to understand. What happens if the building floods, the power to the building fails or the building burns down overnight?


Disaster recovery planning within most tech businesses is weak and is frequently cited as a concern in technical due diligence reports.

Typical questions around continuity are:

  • Do you have a documented and tested Disaster Recovery plan?
  • Do you have a documented and tested Business Continuity Plan?
  • Is there any part of the product or system that is understood by only a single person?
  • Is your Version Control system backed up, where, how often?
  • If the Database Server exploded how much client data would be unrecoverable?
  • If there was a fire at the Data Centre how much client data would be unrecoverable?
  • Do your staff have laptops and if so how is data and applications secured on them?


Closing Summary


An experienced assessor will start each new assessment with a framework of points or questions and then use their experience to start to focus down on key areas dependent on what responses they are getting. The best ones will be able to think on their feet and drive the investigation on a case by case basis.

Hopefully this article will give you a starting point to think about how you can prepare for the due diligence process before being face to face with an assessor and feeling under pressure.

If you have specific questions or would like to speak to one of the team directly then please get in contact.



Email: dave.sharp@technicalduedil.com
Phone: 0845 862 0726

9 Top Programming Languages







There are many different programming languages that a new coder can learn. Here is a review of the top 9 according to the needs of commercial software developers. The list is in no particular order of importance; it merely serves as a reference list.

1.    SQL (pronounced ‘sequel’)

SQL nearly always tops the list; it has become ubiquitous and can be found in various guises. Implemented by a range of database technologies (MySQL, PostgreSQL and Microsoft SQL Server), it powers servers for big businesses, small businesses, hospitals, banks and academic establishments. Virtually every online service, web application or mobile app eventually touches something SQL. Mobile devices and tablets have access to an embedded SQL database called SQLite, and many mobile apps and web-based services use it as a direct technology component.
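As an illustrative sketch (assuming Python and its built-in sqlite3 module; the database file and table names here are purely hypothetical), this is roughly how an application touches an embedded SQL database:

import sqlite3

# Open (or create) a local SQLite database file - the same embedded engine
# that many mobile apps and web services ship with.
conn = sqlite3.connect("app_data.db")
cur = conn.cursor()

# Plain SQL statements do the work: create a table, insert a row, query it back.
cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

for row in cur.execute("SELECT id, name FROM users"):
    print(row)

conn.close()

The same SELECT/INSERT syntax carries over, with minor dialect differences, to MySQL, PostgreSQL and SQL Server.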

2.    Java

Java is now one of the most persistent technologies of the digital era; having been launched nearly 20 years ago, its longevity is unprecedented. Due to its long life, Java is one of the most widely adopted programming languages, used by some 9 million developers and running on 7 billion devices worldwide. When Android hit the market, its programming language of choice was Java, which is used for all native Android apps. The language is regarded as simple and readable by most developers, and that has been its core strength. It is highly compatible with older systems, which ensures that code requires less maintenance and upkeep over long periods. A lot of high-profile online applications are built in Java, such as LinkedIn and Amazon.


3.    JavaScript

JavaScript is a scripting language (not compiled) used on web pages to add interactivity. It should not be confused with Java, which is a compiled programming language. JavaScript is used to create many of the web page effects we have become used to seeing: pop-up boxes, web forms and even simple video games. JavaScript is now a fundamental part of how the internet works and is embedded into web browsers. The language has gained more complex capabilities, supporting things like real-time communication through technologies such as Node.js, and front-end development through frameworks such as Angular.js.

4.    C# (pronounced C-sharp)

C# (released in 2000) is the latest in a long line of Microsoft-developed programming languages, which fit into the well-used .NET framework. It is a multi-paradigm programming language encompassing strong typing, imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. C# is an evolution of the C and C++ languages that have been in mainstream use for the past 30 years.

5.    C++ (pronounced C-plus-plus)

C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation.  It was designed with a bias toward system programming and embedded, resource-constrained and large systems, with performance, efficiency and flexibility of use as its design highlights.

Developed by Bjarne Stroustrup at Bell Labs and first released in 1983, C++ is based on the earlier ‘C’ language. Stroustrup keeps an extensive list of applications written in C++, which includes Adobe and Microsoft applications, MongoDB databases and large portions of Mac OS X. It is a strong language to learn for performance-critical applications such as “twitch” game development or audio/video processing.

6.    Python

Python is a general-purpose programming language that is simple and incredibly readable, since it closely resembles the English language. This readability is Python’s main strength and makes it a great language for beginners. Python has quickly become the language of choice in academia, recently bumping Java as the language of choice in introductory programming courses. Because of Python’s use in the educational realm, there are a lot of libraries created for Python related to mathematics, physics and natural language processing. PBS, NASA and Reddit use Python for their websites.
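As a tiny, hypothetical illustration of that readability, counting the most common words in a sentence reads almost like English:

# Count word frequencies in a sentence - a short snippet to show how close
# Python reads to plain English (the sentence itself is just an example).
from collections import Counter

sentence = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(sentence.split())

for word, count in counts.most_common(3):
    print(word, count)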

7.     PHP

PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Group. PHP originally stood for Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP tends to be a popular language since it is easy for new programmers to use, but it also offers plenty of advanced features for more experienced programmers.

8.    Ruby on Rails

Ruby on Rails, or simply Rails, is a web application framework written in Ruby under the MIT License. Rails is a model–view–controller (MVC) framework, providing default structures for a database, a web service, and web pages. It encourages and facilitates the use of web standards such as JSON or XML for data transfer, and HTML, CSS and JavaScript for display and user interfacing.
Ruby on Rails has many positive qualities, including rapid development, the need for less code, and a wide variety of third-party libraries. It’s used by companies ranging from small start-ups to large enterprises and everything in-between. Hulu, Twitter, GitHub and Living Social use Ruby on Rails for at least one of their web applications.


9.    iOS/Swift

Swift is a general-purpose, multi-paradigm, compiled programming language developed by Apple Inc. for iOS, OS X, watchOS, tvOS and Linux. Swift is designed to work with Apple's Cocoa and Cocoa Touch frameworks and the large body of extant Objective-C (ObjC) code written for Apple products. Swift is intended to be more resilient to erroneous code ("safer") than Objective-C, and more concise. It is built with the LLVM compiler framework included in Xcode 6 and later and, on platforms other than Linux, uses the Objective-C runtime library, which allows C, Objective-C, C++ and Swift code to run within one program.


Developers will find that many parts of Swift are familiar from their experience of developing in C++ and Objective-C. Companies including American Airlines, LinkedIn, and Duolingo have been quick to adopt Swift, and we’ll see this language on the rise in the coming years.

Tuesday 9 August 2016

Investment for Video Game Studios (August 2016)



I'm in Edinburgh about to give a presentation on how small independent games studios can get themselves going and how they can generate some investment or grant funding opportunities. I was kindly asked to do this by the peeps at The Big Learning Company and I'm happy to oblige.

My presentation is quite short due to time constraints so I thought I would supplement the slides with some further comments on here, mainly around the top problems start-up studios face.


#1 Team

As with many non-gaming businesses, it's very hard to raise money if you're a sole founder. The risk of a one-man management team is too great for many investors. It's still quite hard with two people. The optimum number is three. Four is too many and brings other issues with it. If you're a sole founder, don't be surprised if investors suggest you find a couple of other people; it's about risk management.

#2 Product

There is a massive difference between a good game and a commercial one. There are hundreds of good games (good art, solid code, nice story) every year that end up with little or no audience. If you're a start-up, start with an audience in mind and build a game towards them. Avoid the build-it-and-they-will-come thing.


#3 Timing

There are investment cycles that need to gel with the games cycles to get a good fit for both the studio and the investor. New hardware releases and launch titles are a good point to aim for, so work backwards from them to see when investment needs to be completed, and make that timing part of the narrative of your investment.


#4 Expectations

Many game studio founders fail to fully understand the expectations of professional investors until it's too late and the relationship has broken down. Investors expect to make a return, and founders often see the road ahead and the timelines differently. Don't enter into an investment agreement without fully understanding what the investor wants and needs, and when.


#5 Focus on Delivery

Not everything goes to plan, it would be a naive investor that thought that way. Most investors are perfectly OK with problems if the founders are focused on delivery. What they don't want are long-winded elegant solutions built internally that could have been bought for $500.00 and implemented in hours or days. Always focus on getting to market and your investor will always be ready to help financially.

#6 Governance and Guidance

It can sound a bit pretentious when you talk about boards of directors and non-executive directors ("NXDs"), but it's never too early to start getting the best advice you can. The formation of a board is a good indication to potential investors that you're serious about building a sustainable company. NXDs should be used on a rolling basis to help combat specific issues you're facing. Collectively, the board and the NXDs give your company strong governance.


#7  Fail Fast

An oldie but a goodie. Plans are useless but the planning process is vital. Make sure you have a plan B, C and probably D. When something isn't working, move on to the next version of the plan, but do it fast. Learning that pivoting is not a bad thing is vital, and every change needs to be rationalised and documented for the investors.


#8   Plan for Success

Many entrepreneurs know exactly what they are going to do if they come up short on sales targets or revenues. They tend to have a regressive set of steps that make less than expected revenues still feasible. They then utterly fail to plan for success, where revenues exceed their expectations. This sounds like an odd problem to have but getting a business to scale correctly and not lose potential revenue and customers is slightly more of an issue than coming up short on sales.


#9 The Growth Staircase & Transition Planning

In the beginning it's about games and being a games studio. Ultimately it becomes about transitioning to being a software company that is currently making a game. They're not the same thing. The transition to having a financial director, human resources, a bespoke IT department, sales and marketing staff and so on is not straightforward, but it is a symptom of success. Once you reach that point, the transition needs to be planned and implemented like anything else. Many games studios fail to make this transition and then struggle.


#10   The Exit


Your investor wants to make a return on their investment. A multiple of x10 in an ideal world, x6 would be nice, and x4 is probably something like the average. Understand that this conversation is not one you can avoid, and at some point the investor will want a sale (typically after 4-5 years). If you wanted to run the company forever then you got in the wrong elevator.








Monday 1 August 2016

7 Measures for Business Cyber Resilience








There are ever-increasing threats to business in cyberspace: DDoS, ransomware and phishing, to name but a few. There are some proactive steps you can take as a business to help mitigate these threats:

1.    System Hygiene

Everything starts with a proactive and managed approach to keeping computer systems clean and secure: having software monitoring desktop machines for intrusions, making sure that all routers and firewalls are configured correctly and running the latest operating systems, ensuring that staff do not plug unknown devices into their machines, and so on. All of these activities, if treated as routine maintenance tasks, will stop basic low-level issues from becoming major ones. It’s a small investment in time and money that has a disproportionate effect on keeping your business safe, and, like insurance of any type, you’ll be glad you had this approach in the long run.

2.    Planning

Plans are fundamentally useless; as soon as something goes wrong, it’s typical that the incident does not match the plan. The planning process itself, however, is a vital weapon. If the senior management team understands how to react to a cyber-attack and has a number of documented options available in advance, it can act quickly to stop a problem from escalating. The senior team needs to contemplate all forms of possible attack and create a response for each flavour of incident. Those responses should be made available to the staff and reviewed at regular intervals. Training key staff members on how to respond to an attack is vital.
3.    Risk Profiling

Not all cyber-attacks are created equal. It’s a positive position to be in if a company can recognise patterns of attack, what may already have happened and what comes next. This allows a far greater capability to create a bespoke defence to different problems and to know when to act and where to look. Different company digital assets may require vastly different approaches to keeping them secure; most cyber-attacks will not be beaten by a one-size-fits-all approach. Create different risk profiles for different attacks and have a fit-for-purpose response.
4.    Metrics

During a cyber-attack it’s most unlikely that you’re going to have the option to work in high levels of detail. It’s more important that you act quickly than that you act precisely. Focus on being agile with your responses, using rough figures and estimates rather than precise numbers: it keeps you ahead of the attacker and avoids your response grinding to a halt because of analysis paralysis. Run simulations, record numbers and create ranges that you can recognise, and define what response is appropriate for each.
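As a rough sketch of that idea (the metric, thresholds and responses below are entirely hypothetical), pre-agreed ranges can be written down so that an observed figure maps straight to an agreed action without further analysis:

# Map a rough metric (e.g. requests per second) onto pre-agreed ranges,
# each with an agreed response - illustrative thresholds only.
RESPONSE_RANGES = [
    (0, 1_000, "normal - no action"),
    (1_000, 10_000, "elevated - enable rate limiting, alert on-call"),
    (10_000, float("inf"), "severe - engage DDoS mitigation provider"),
]

def response_for(requests_per_second: float) -> str:
    for low, high, action in RESPONSE_RANGES:
        if low <= requests_per_second < high:
            return action
    return "unknown - escalate"

print(response_for(4_500))  # prints the 'elevated' response

The real value is in agreeing the ranges and responses in advance, during simulations, so that nobody is debating numbers mid-incident.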

 5.    Risk Mitigation

Your company needs to spend time and money to mitigate the risk of a cyber-attack. Some of these measures seem like common sense, and yet a lot of companies still fail to ensure they are in place:
  • Training: Make sure all your staff understand their role in cyber security and actively engage with them in discussions around how the company’s protective stance can be enhanced.
  • Certification & Compliance: Even if your company is not software or tech focused, make sure that you go through ISO 9001 and ISO 27001 certification. Stick to the rules and regularly retest yourself. These standards are there to help you defend your company and its information security.
  • Policy & Procedure: Write specific processes and policies for the company to use that enable new habits to form within the staff. Bring Your Own Device policies, rules on portable hard drives, policies on accessing external systems and physical security mantras will all help mitigate risks.
6.    Cyber Insurance

In the modern era it would be remiss of companies that hold personal information or sensitive data not to have cyber insurance. These policies cover the loss of data or information from IT systems or networks. The average cost of a cyber-security breach is £600k-£1.15 million, so typically carrying £2.5 million of cover seems a minimum policy amount. There is some good guidance on cyber-insurance cover available from the Association of British Insurers here.
7.  Go!

Press the go button and put everything into place. It’s often the case that plans around cyber-security are left unimplemented because of the “it can’t happen to us” syndrome. If you’ve gone to the extent of the planning, then the implementation should be easy and straightforward. Don’t be the victim of a cyber-attack for the sake of taking the last steps of implementing your cyber-security strategy!

Saturday 30 July 2016

The 7 Deadly Sins of Software Development






Here is a rundown of the main issues with a suggestion or two on how to limit their impact:

1. Poor Technology Choices


From the outside this can often seem an odd statement. How do you know you’re making the right choice when there are so many to choose from? Many software projects are started in the wrong technology set, and the teams either just put up with it or pivot to a different tech mid-project – making all previous work largely redundant. The knock-on effect of making the wrong choice in the early stages can be huge.

All development teams have a preferred technology to work with. The question should be what is best for the project, not what is best for the team. If the project’s needs don’t match the team, then they are the wrong team. It’s not about getting the team to adopt something for the sake of working on the project; that has some inevitable conclusions when the team can’t complete the project on time, on spec or at all. The developer must always do the right thing for the client, and if that means not working on a project because of a tech mismatch then that is the correct decision.

New generations of software developers will come in with new languages and approaches. New = risk. If the project is safety-critical or regulated in nature, then Ada – despite being 25 years old – is the right choice, not the latest fashionable language that’s breaking ground. The mobile space is a lot more fluid and fast-paced in terms of change, but should be seen in the same light: sometimes the last-gen languages are still the right choice over the next gen.

2. Agile is the answer to everything – NOT!


Development methodologies come in all shapes and sizes but ultimately no matter what you think of them, you need one. They’re not a magic bullet that will solve all your problems but they do limit the size and shape of issues.

Waterfall – the established wisdom for many companies – still remains a viable option. Agile, the young pretender to Waterfall’s crown, has become a middle-management battle cry, despite many of those using it not really understanding the concept, never mind its associated problems.

Under scrutiny, Agile still makes most sense when used as part of cloud-based application development. The need to rapidly and frequently update applications makes this a sensible approach. Waterfall favours slower release cycles on products that have a larger qualitative code requirement (FinTech, MedTech, banking, safety-critical systems).

Any choice you make is not going to remove the productivity issues that most software teams face, but it will keep the development cycle within a monitorable and measurable environment that management can understand and assess.

3. Inadequate and Under-Resourced Testing!


If your development team is running short sprints, then the testing and deployment process is pivotal to having an efficient workflow. The testing should ensure product quality and avoid broken code making it to the live environment but it must be uber-efficient to not slow or disrupt the successful completion of the sprint.

An automated testing process, commonly referred to as Continuous Integration (“CI”), is the best solution for allowing the team to test rapidly and deploy when code passes. Designing and implementing CI into the workflow is a specialist task and is expensive at the outset, sometimes making it hard for smaller development teams to justify the cost.

The return on the investment into CI should be huge. Development teams should be able to make quick updates and changes and deploy them into the live environment with the surety that the code has high integrity. What this does mean is that developers need to take a higher level of personal responsibility to create and maintain the automated tests – even if the team includes a dedicated DevOps developer. CI does not remove the need for test engineers but it should allow a smaller number of people to cover a larger code base more efficiently.
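As a minimal sketch of what those automated tests look like in practice (assuming a Python code base and the pytest test runner; the calculate_vat function and its 20% rate are hypothetical, included only so the example is self-contained), a CI server would run something like this on every check-in and block deployment if anything fails:

# test_pricing.py - the kind of small, fast unit test a CI pipeline runs on every commit.
import pytest

def calculate_vat(net_amount: float, rate: float = 0.20) -> float:
    """Hypothetical production function: return the VAT due on a net amount."""
    if net_amount < 0:
        raise ValueError("net_amount must be non-negative")
    return round(net_amount * rate, 2)

def test_standard_rate():
    assert calculate_vat(100.0) == 20.0

def test_zero_amount():
    assert calculate_vat(0.0) == 0.0

def test_negative_amount_rejected():
    with pytest.raises(ValueError):
        calculate_vat(-5.0)

The CI server simply runs pytest after each commit; a green run is the signal that the build can be deployed.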

4. No Long Term Product Road-map


Many businesses with a software component often start off with the notion that the software has a path and can be added to, amended and expanded. This intention often gets lost, forgotten about, or is the first thing to be cut when times are tight financially. Software dates like everything else, and all the company is doing is deferring the cost to a later date; it doesn’t go away.

As the company grows and the user base within the software increases, it’s often the case that strategies at the crazy end of the spectrum are opted for. Every customer having their own version of the software is a common outcome of under-investment at the right times, making maintenance and updates insanely complex and time consuming.

A 12 to 24 month product road-map that is well thought out and funded at the right level will always ensure that both the development team and the customers are aware of the point in time when new features and updates are coming on-stream and can therefore plan accordingly. Anything other than this means the company and the customer are acting tactically not strategically – firefighting today’s issue rather than planning delivery further ahead.

5. Weak Project Management / No Product Owner


Getting a software project to market relies on a symbiotic relationship between the product owner and the project manager.

The product owner must understand the customer and their requirements. The project manager must understand both the development team in terms of skills and the technology in terms of capability. The two must then come together to produce a rational and reasonable set of tasks and timescales that allow the product take shape with the right functionality and in the right timescales.

The common mistakes include the Product Owner attempting to be the project manager and trying to guide the team without the required understanding of the issues. The other common mistake is having no product owner at all; the Project Manager is then left to try to create software that has the user’s best interests embedded into the product without really knowing who the customer is in the first place.

6. Teams split across multiple locations


This is the great white elephant of software development. Outsourcing became a thing for software development in the late 1990s, and despite the huge improvements in productivity, workflow and communication tools, it still remains a major risk to any software development project.

In an Agile project management approach, the daily stand-up is complicated by multiple locations and time zones and therefore loses its basic purpose. In many instances testing and development are not in the same location, and that can increase the time it takes to examine and identify issues ahead of agreeing the fix – at which point this goes around the circle again.

Creating different projects in different offices and managing them centrally is less of an issue. Developing the same project in multiple locations and expecting an efficient delivery is unlikely.

7. Environmental Costs


There is a lot written about the best environment for software development teams; it’s very subjective and circumstantial. It’s often written that open-plan offices are very disruptive and that developers are more productive when they are in small rooms with just 2-3 people in each. While concentration levels might be more consistent, you also need to consider the company culture and the wellbeing of the staff.

Like most things in life, it’s about balance. If the staff can work to reasonable standards in an open-plan environment, I (personally) still think that is better than the cubicle-farm approach. Some developers definitely need calm and quiet, and some work fine in the hustle and bustle of an open-plan space. It needs consultation with the staff and an open platform for discussion, so that staff needing quiet can get it and others that need stimulation and human contact can also thrive.

Don’t underestimate the effect of a poor work environment if the development team is struggling to deliver consistently or has communication issues.

Monday 9 May 2016

The problem with crowd-funding ...



When Kickstarter launched in 2009 I was sceptical. I'm still sceptical in 2016. Whilst I thought the concept of crowd-funding was an interesting open door for a lot of new projects, I also saw the downside - the potential for a lot of people to take money and not deliver. 

The market for crowd-funding has bloomed and there are now 30+ sites you can browse for projects and products to back. I've put money into two gaming projects (as that's my background), but more to see what the process was like than because I was desperate to back those specific games. Neither of them has made it to market, reinforcing my scepticism.


Projects that fail to find backing

If your project isn't good enough it just won't get traction and therefore won't get funded. There is no mystery here. The public is more educated than ever, and a project that isn't worthy of attention will not draw the cash. In the early days a slick marketing campaign could help a less-than-interesting project get its funding, but those days are long gone.


Projects that find funding

These projects/ideas fall into 3 main categories in my mind:

- Marginal
These are projects that just scrape past the post. There was probably a concerted effort by the founders and some pre-awareness, but they just made the target.


- Mainstream
These are projects that make their target in less than 60% of the available time and end with 150%-200% of the target. The product is probably a reasonable idea but typically the amount invested in these is on the low side - so the risk level is average.

- Mega
These are well marketed and slickly put together. The bulk of the funding was pre-arranged before the campaign went live and the over-performance in crowdfunding probably means that they could and should have gone through another funding route.

Here are my top-6 things to consider:


#1 Founders

One of the bigger issues is the background of the founders of the projects seeking funding. There is no way that the crowd-funding sites can sanity-check the people behind the projects, so it really comes down to the individual to take a closer look. It's the case that some of the people looking for crowd funding don't have the background and track record that would make funders hand over cash outside of the crowd-funding sites. It's one of the areas where the sites could do a lot more to protect the users. For the projects I have backed, I knew the founders and was happy to fund their projects - but this is not typically the case.


#2 Tracking

Let's assume that you've backed a project that is interesting and that the founders are capable of delivering the idea. Project management of the delivery is often completely anonymous. Getting the semi-regular update emails is OK, but if, like me, you have a technical background, you want a bit more insight into the delivery, the problems, the solutions and so on. Even if you're not technical, a more granular insight into the delivery schedule would be more inclusive.

#3 Trust

Thinking about the above, if you don't know the founders and you can't get a good oversight of how the project is progressing, then how can you build trust? You can't, plain and simple. So without transparency and trust between the funders and the founders, why are we surprised that there is disquiet over many crowd-funded projects?

#4 Realism

Do your homework. Make some enquiries into how the project should be put together and then judge whether the founders have a realistic time frame or funding amount to turn the project into reality. Even if your understanding is basic, it should be enough to broadly assess the project's credentials. If it doesn't look realistic, it probably isn't.

#5 Copyright

So many crowd-funded projects run into copyright issues. They unknowingly infringe on someone else's IP and end up getting bogged down with due process. Google around and see just how many other products are similar before committing. The more you can find, the more wary you should be. It's part of the taking-your-time and do-your-homework approach you need before crowd funding.

#6 Compliance & Accounting

There are different schools of thought over whether crowd-funded amounts are capital or revenue. Look at the location that the founders state as the home of the project and then look at the rules relating to that location. If the project is likely to lose funds through an overly aggressive tax regime, then it's also likely to run out of money - or that's what history would suggest.

Closing Summary

In the end it's a case of buyer beware. If you don't take the time to go through some simple due diligence checks, then every now and then you're not going to get anything for your money. It's not hard these days to get some baseline information that can give you a better sense of who is involved and what the potential issues might be in delivery - you just need to spend the time.