me
scaling at runtime
- more instances of one server
- scale every service independently
- automated scaling
scaling at development time
huge monolith
- high number of developers
- multiple teams
- no independent deployment
- complex coordination
- high coordination overhead
- no code ownership (as team)
- no domain ownership
# participants | # connections
2 | 1
3 | 3
7 | 21
10 | 45
20 | 190
50 | 1225
100 | 4950
f(n) = n*(n-1)/2
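The table values follow directly from the handshake formula above; a quick sketch:

```python
def connections(n: int) -> int:
    """Maximum number of communication paths between n participants: n*(n-1)/2."""
    return n * (n - 1) // 2

# reproduces the table above
for n in (2, 3, 7, 10, 20, 50, 100):
    print(n, connections(n))
```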
scaling at development time
smaller systems
- need fewer developers (each)
- autonomous development
- less communication overhead
- faster feedback
- code ownership as team
- domain experts
consequences for testing
decoupled systems
- independent development
- independent test
- independent deployment
testing challenges
decoupling
- many test objects
- interface as test entry point
- interface as point of failure
- communication layer
- error handling
testing challenges
messaging
testing challenges
independent deployments
- version mismatch
- changing partner systems
- keep independence and autonomy
Example: UserService
- renaming of a field at interface level
- IDE refactored tests in the same code base as well
- build is green
- deployment to production
- consuming service (checkout) hadn't changed
- !!! FAIL !!!
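The failure mode above can be illustrated in a few lines (field names here are hypothetical, not from the talk): the provider's own refactored tests stay green after the rename, but the unchanged consumer still reads the old key.

```python
# Hypothetical UserService after the refactoring: "user_name" was renamed
# to "username" on the interface; the provider's build is green.
def provider_response():
    return {"username": "alice"}

# The consuming checkout service was NOT changed and still expects the old field.
def checkout_consumer(payload):
    return payload["user_name"]

try:
    checkout_consumer(provider_response())
except KeyError as e:
    print("FAIL in production: missing field", e)
```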
unit test
- isolated tests for business logic
- as little mocking as possible
- too many mocks can make systems "sticky"
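A minimal sketch of such a unit test, with isolated business logic and no mocks at all (the discount rule is an invented example, not from the talk):

```python
# Hypothetical business rule: 10% loyalty discount after 5 customer years.
def discounted_price(price: float, customer_years: int) -> float:
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if customer_years >= 5 else price

# Isolated unit tests: pure logic in, assertion out, nothing to mock.
def test_loyal_customer_gets_discount():
    assert discounted_price(100.0, 6) == 90.0

def test_new_customer_pays_full_price():
    assert discounted_price(100.0, 1) == 100.0

test_loyal_customer_gets_discount()
test_new_customer_pays_full_price()
```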
solves example problem
unit test
rating
unit test
- trustworthy
- cheap
- very fast
- reliable
- targeted
comment
unit test
test driven development
- training required
- motivation
pair programming
- challenges
- fun
- many benefits (tests, code quality)
- training required
- not every pair fits together
- exhausting
small integration tests
- run within build environment
- with(out) IoC container
- with(out) (in memory) database
- with(out) renderer
- isolated UI
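One way to sketch such a test: it runs inside the build environment against a real but in-memory database instead of a mocked repository (table and function names are illustrative, not from the talk):

```python
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))

def find_user(conn, name):
    row = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

def test_roundtrip_with_in_memory_db():
    # a real SQL engine, but no external service required in the build
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    save_user(conn, "alice")
    assert find_user(conn, "alice") == "alice"
    assert find_user(conn, "nobody") is None

test_roundtrip_with_in_memory_db()
```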
small integration tests
tools
solves example problem
small integration tests
small integration tests
rating
- trustworthy
- cheap
- fast
- reliable
- targeted
Testing systems in isolation
- only one test object
- with infrastructure (databases, message broker, ...)
- simulating communication to all other systems
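The idea can be sketched like this: the system under test keeps its real logic, but the communication to every other system is replaced by a stub (all names here are hypothetical):

```python
class StubRatingService:
    """Stands in for a real partner system during the isolated test."""
    def rating_for(self, user_id):
        return 4.5  # canned response instead of a network call

class ProfilePage:
    """The one and only test object."""
    def __init__(self, rating_service):
        self.rating_service = rating_service  # injected, so it can be stubbed

    def render(self, user_id):
        return f"user {user_id}: {self.rating_service.rating_for(user_id)} stars"

page = ProfilePage(StubRatingService())
print(page.render(42))  # exercises the real logic, no partner system needed
```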
Testing systems in isolation
tools
solves example problem
Testing system in isolation
rating
Testing system in isolation
- trustworthy
- business logic
- for communication
- cheap
- "fast"
- reliable
- targeted
Testing system pairs
- includes communication between two systems
- knowledge of communication required
- special deployment of test objects
Testing system pairs
(max) number of pairs
# µ-services | # pairs
2 | 1
3 | 3
4 | 6
5 | 10
7 | 21
10 | 45
20 | 190
50 | 1225
Testing system pairs
setup
- communication layer
- every system must have an isolated setup
- creating a setup for a system you don't own
Testing system pairs
tools
Testing system pairs
ownership
- who owns the tests
- creation
- execution
- fixing
solves example problem
Testing system pairs
- YES
- only if exists
- only if not ignored
rating
Testing system pairs
- trustworthy
- cheap
- fast
- reliable
- targeted
conclusion
Testing system pairs
- may solve initial problem
- much too expensive for interface tests
End-to-end testing
- full system deployment
- which versions
- all bleeding edge
- only one changed, other production
- many setups
- two conflicting changes
- breaking changes require touching multiple systems at the same time
End-to-end testing
- coupling
- scaling
- more failure points
- network
- browser
- timings
- provisioning
solves example problem
End-to-end testing
- YES
- only if exists
- only if not ignored
rating
End-to-end testing
- trustworthy
- cheap
- fast
- reliable
- targeted
End-to-end testing
conclusions
- very complex setup
- coordination overhead
- creates bottlenecks
- creates coherence
- wrong layer for most test cases
- red most of the time
consumer driven contracts
consumer driven contracts
consumer driven contracts
- consumer and provider define a contract
- define behavior with examples
- clarification on design time
- shared configuration file (owned by consumer)
- generated mock for consumer
- is used to ensure all requirements are specified
- test execution against running provider
- most relevant platforms: JVM, .NET, JavaScript, Python, Swift, Ruby, Go
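The core idea can be condensed into a toy sketch (this is not the real Pact API, just the mechanism): the consumer defines the expected interactions as data; the same contract drives a generated mock for the consumer's tests and a verification run against the provider.

```python
# Contract owned by the consumer, shared with the provider (toy format).
CONTRACT = {
    "GET /users/1": {"status": 200, "body": {"id": 1, "name": "alice"}},
}

def mock_provider(request):
    """Generated mock the consumer implements and tests against."""
    return CONTRACT[request]

def verify_provider(real_provider):
    """Run in the provider's build: every contract example must hold."""
    return all(real_provider(req) == expected for req, expected in CONTRACT.items())

# a provider implementation fulfilling the contract
def provider(request):
    return {"status": 200, "body": {"id": 1, "name": "alice"}}

assert mock_provider("GET /users/1")["status"] == 200
assert verify_provider(provider)
```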
workflow: new contract
- consumer: write consumer tests
- consumer: implement consumer
- consumer: make pact available to provider project
- provider: create provider project
- provider: integrate pact:verify for contract in build
- provider: implement provider
- provider: release
- consumer: release
workflow: provider changes contract
workflow: consumer extends contract
- consumer: creates new contract tests
- consumer: publish pact file
- consumer: build ok
- consumer: release STOP
- provider: build failed
workflow: consumer extends contract
- consumer creates new contract tests
- publish incubating pact file
- provider extends service
- consumer uses new feature
- consumer and provider use current pact
- release
pact: state
- provider and consumer define states
- setup url
- pact broker list already available states
- validates pact file against deployed Swagger docs
- reduces feedback cycle
Spring Cloud Contract
- Groovy DSL
- based on WireMock
- JVM only
- Maven repo as exchange
- seamless integration
- MESSAGING Support
solves example problem
consumer driven contracts
rating
consumer driven contracts
- trustworthy
- cheap
- fast
- reliable
- targeted
Test execution and documentation
CI Server
- right place for automated test execution
- transparent results
- combination of multiple stages
- no interpretation: red / green / yellow ?
- alarms
- deployment
- documentation and workflow (ISO 27001, ITIL, ...)
testing in production
- feature toggles
- small selected set of affected users
- strong monitoring and alarming
- fast deployment pipelines for fixes
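A feature toggle that exposes a change only to a small, stable cohort of users can be sketched like this (the hash-based rollout rule is an assumption, not from the talk):

```python
import hashlib

def toggle_enabled(feature: str, user_id: str, percent: int) -> bool:
    """Deterministically enable the feature for roughly `percent` of users."""
    h = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(h, 16) % 100 < percent

def checkout(user_id: str) -> str:
    # hypothetical feature name; only a small cohort sees the new flow
    if toggle_enabled("new-checkout", user_id, percent=5):
        return "new flow"
    return "old flow"
```

Hashing feature and user together keeps each user in the same cohort across requests, which is what makes monitoring the affected set meaningful.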
blue-green
vs.
canary releases
blue-green
- two separated production environments
- rolling update on all nodes
- mark blue nodes as green and route all (new) users to it
canary releases
- update a few nodes
- monitor these nodes
- monitor and roll out to more nodes until all are updated
- roll back if monitoring alerts
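The canary procedure above can be sketched as a loop; `update`, `rollback` and `healthy` are stand-ins for the real deployment and monitoring hooks:

```python
def canary_release(nodes, update, rollback, healthy, batch_size=2):
    """Update nodes in batches; roll everything back on the first alert."""
    updated = []
    for i in range(0, len(nodes), batch_size):
        for node in nodes[i:i + batch_size]:
            update(node)
            updated.append(node)
        if not all(healthy(n) for n in updated):  # monitoring check
            for node in reversed(updated):        # roll back what was touched
                rollback(node)
            return "rolled back"
    return "released"

# toy usage: "deploy" by writing versions into a dict
state = {}
result = canary_release(["n1", "n2", "n3", "n4"],
                        update=lambda n: state.__setitem__(n, "v2"),
                        rollback=lambda n: state.__setitem__(n, "v1"),
                        healthy=lambda n: True)
print(result)  # released
```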
blue-green and canary releases
problems
- server side states
- complex
- compatibility
- database changes
- external services
Shift-right
- balance testing <-> monitoring & alerting
- many problems can't be captured by tests
No End-To-End Tests
any more???
Brave new DevOps world
- ambiguity and uncertainty
- some people may not feel comfortable with this!
don't leave your primary business out in the cold!
E-E: journey tests
- very few tests
- most relevant use cases for business (risk based)
- limit to 10 (continuous deployment) or 20 minutes
- optimisation
- test on API instead of UI Level
- parallelisation
- reduction and focusing
system setup
- stable setup
- if using docker
- commit and push containers on failed builds to a registry
- log communication
- collect logs
Data
- create a test data management
- if deleting: only delete data it created itself
- migrate the same way as production data-sources
- prepared data-sources
- ready to run docker container
rating
E-E: journey tests
- trustworthy
- cheap
- fast
- reliable
- targeted
responsibility
- not only by QA
- start earlier
- create a test plan over all layers
- limit E-E-Tests
costs of missing tests
developers
- fear changing anything
- very expensive on-boarding
costs of missing tests
customer
- frustrated
- alternative solutions
costs of missing tests
organisation
- change restriction management
- low speed
- too slow for market
conclusions
- focus on the foundation (of the test pyramid)
- explicit selection of test level
- unit
- small integration tests
- testing systems in isolation
- consumer driven contract testing
- E-E: journey tests
- have a big picture
your tests!
- it's a process
- you need to tweak it all the time
you can't prevent everything, be prepared
- consulting, coaching and project support
- Java EE
- build systems: Gradle and Maven/Ant migration
- test automation
- coaching in agile projects
- DevOps