(You'll find more on this subject - the "presentation" mentioned below - on the EuroSTAR blog in a few weeks.)
Practicing testing well requires understanding the history of testing and of software engineering. If you do not, you will not be able to recognize recurring patterns, and you will be led astray by what looks new and revolutionary but is not. Knowing the history of testing, you can see the mechanism of its development and its evolutionary logic, and you are better able to foresee future trends. There can be no real context-driven testing unless you can see the whole context - including the historical context!
My presentation will both describe the history of testing and attempt to identify the mechanism of its evolution, using it to predict future trends.
1. Ancient times, 1940-1955. The era of geniuses. Software developers were exceptionally gifted people, many of them Nobel-prize-level scientists. This artisan period worked because of the exceptional skills of these developers, the small scale of software and low business pressure. Testing was embedded in the development process in a natural way.
2. Medieval times, 1955-1968. The era of emerging engineering. The volume of software produced could no longer be handled by geniuses alone - more ordinary programmers appeared. Software development was no longer just programming: requirements engineering, design and architecture, testing, integration and maintenance became more and more separate disciplines. IT was no longer a natural part of computer science; it became increasingly interdisciplinary. Unless rules and processes were followed, IT projects failed.
3. Modern times, 1968-1990. The size and importance of software projects grew, the failure rate increased, and the need for organization, processes and specialization became obvious. Trial and error as a method of coping with uncertainty and complexity was gradually abandoned in favour of predictive, systematic, bureaucratic approaches. Software testing became more and more accepted as a separate discipline; formal standards appeared and were followed. The analytical, standards-based and quality-oriented testing schools dominated.
4. The Great Schism, 1990-2005. The context-driven school appears and gains ground, and the foundations of the agile framework (such as XP) are laid, but at the same time the very formal ISEB/ISTQB approach starts to dominate the world.
5. Contemporary times, 2005 - today. The schism widens: exploratory and agile approaches become to some extent extreme and almost religious rather than rational, while ISTQB strangles its opposition, becomes political and joins the PMI/IEEE/PRINCE2/ITIL super-standards league. At the same time, common-sense, middle-way approaches appear, like post-agilism... but nothing similar is yet visible for testing. Or is it? This presentation shows possible beginnings visible today and tries to predict future test trends from them.
Finally, I will make some remarks on the rules that drive this development, drawing on social science and cultural anthropology: how new schools and new memes appear, change and evolve.
Friday, 30 May 2014
Sunday, 18 May 2014
This entry will be available a couple of weeks after it has been published in the June 2014 issue of Professional Tester.
Wednesday, 14 May 2014
Requirements on a shoestring: how to cope in industry projects with poor requirements engineering awareness
Welcome to Karlskrona and to my tutorial :)
T04 - Requirements on a shoestring: how to cope in industry projects with poor requirements engineering awareness
Studying the history of IT industry breakthroughs since the 1950s, you do not find the term "requirements engineering" much. Yet, for all practical purposes, it was precisely the brilliant handling of requirements that characterized all its success stories.
At RE'14 we hear about great advances and superb accomplishments in requirements engineering, but in the IT industry the situation is usually much worse. To improve processes and projects fast, you do not have time to revamp them radically by introducing full-scale requirements engineering; instead, you have to work through channels already available, where requirements engineering can be introduced under the guise of other names and procedures. This tutorial will help participants learn how to do this, through the following gateways:
- Project management practices: PMI (PMBOK), IPMA and PRINCE2 leave much room for RE improvement.
- Agile: the agile framework is very requirements-centred, albeit using exotic terminology.
- Programming and design: RE improvement through DDD, BDD and FDD.
- Business analysis - if its scope is sufficiently broadened, it may be an effective gateway for comprehensive RE practices.
- Test analysis and design: excellent ways of filling the gaps left by insufficient RE. We will look at agile ATDD and TDD from this perspective.
- Exploratory testing: this exceedingly popular ideology provides ample room for performing some good, though late, requirements elicitation and analysis.
- Market research: in many respects, it essentially is the same as requirements elicitation.
Date: Monday, August 25, 2014 (full day)
I intend to tackle another, but equally if not more important question: how can the required product quality be assured as effectively and as efficiently as possible? In other words, how much quality assurance is just enough, but not too much?
Along the way, this presentation will address a number of crucial test-related issues that are normally discussed in a chaotic, haphazard way, such as:
- what is the goal of testing
- what are the relative benefits of unit/developer testing
- is exploratory testing better than non-exploratory testing
- should testers be vacuum-cleaners?
On the other end of the scale, there are fast and risky methods: coding early without wasting too much time on analysis and design, testing cursorily and keeping your fingers crossed after deployment. Will it work, or will it not? The risk may be too high, especially where the cost of failure is huge; but there is a lot to gain, and the risks are negligible, if the failure consequences are acceptable.
How to find the optimum solution for a given situation, product type and project? The presentation will tackle this issue from four perspectives (see the four images below). For each perspective, I will discuss how various risk levels and risk strategies apply to it.
Perspective 1. What is the optimum level of the relative cost of quality assurance? In other words, how much carefulness is just right?
Perspective 2. What is the optimal share of testing among other quality assurance methods in projects?
Perspective 3. When is it best to test? Boehm's curve and Ryber's curve in testing.
Perspective 4. Testing – how much should be thoroughly planned, designed and specified in advance (“scripted”), and how much should you rather test in exploratory style?
Tuesday, 13 May 2014
DEADLINE: 18 May 2014
Hungarian Software Testing Forum (HUSTEF) 2014
Practical Software Testing
29 - 30 October, 2014, Budapest, Hungary
"From the very first minute, all presentations were very interesting to me. This really IS a difference compared to many conferences, where perhaps one or two presentations are interesting, and then the thought appears: 'oh my, what is this guy talking about?'. Not at HUSTEF!" :)
Monday, 12 May 2014
RE'14 (22nd IEEE International Requirements Engineering Conference), 25-29 August, Karlskrona, Sweden
My one-day tutorial (working title: "How Do They Do Requirements Engineering Where They Do Not Do Any Requirements Engineering?") has been accepted at this conference. See you in Karlskrona!