TSQA 2022 Conference Logo

All times are in Eastern Standard Time. All sessions are live and online!

Can't make some of the sessions? No worries - the event will be recorded and sessions will be shared with ticketed attendees to watch at their convenience!

 

march 9 9:00-10:30am

Keynote

Open Testing: What if we opened our tests like we open our source?

Andrew Knight, Developer Advocate, Applitools

 

Sessions

 

march 9 10:45-11:30am

  • “Our craft has the wonder that we can never run out of things to learn. Something new will always come out that we can learn from, to improve and evolve as testers; we can continue improving our skills until the end of time.” This applies to everyone, from newcomers to veterans.

    In our line of work, it is very easy to fall into a comfort zone. This is not a bad thing in itself, but if we are not careful it can make us lose perspective on our career: we can easily slip into the monotony of the activities we carry out, which often leads to a lack of interest in learning new things. Being stuck in our comfort zone in a bad way often creates a state of disinterest in improving ourselves and an unwillingness to take the risks that would help us grow as professionals. One repercussion we will explore is that the knowledge we already have is not always enough to carry out the tests our application needs.

    Considering this, I will present the benefits of extending our knowledge and show that staying in your comfort zone doesn't have to mean being stuck or lazy; you can still try new things and evolve. Finally, attendees will leave with “how to start” reasons and resources that will let them take their first steps into this new mindset.

    Eugenio Elizondo Perales, Sr Automation QA Analyst, Epicor Software

  • Is it the future? Is it the present? Is it really codeless? Can anyone really automate? Are the tests maintainable? Is it scalable? 🤔

    We have heard these and many other things about codeless test automation. Join us in this presentation to discover myths and the realities of codeless test automation.

    Ivan Barajas Vargas, MuukTest

  • Microservices are becoming more prevalent. The external behavior of an application depends on multiple services working together. Each service must be checked to ensure that it both provides the desired behavior and handles exceptions and error conditions, such as the inability to communicate with a dependency. Interactions between these services need to be checked and monitored. Checking behavior does not stop at deployment but needs to continue after release.

    From a testing perspective, microservices can be viewed both as mini-applications with external behavior and as internal components. For applications, the externally facing triad (tester, developer, and customer) collaborates to create tests for behavior; for internal components, a different triad (tester, consumer, producer) generates these tests. To properly test microservices, we need to see, feel, touch, heal, and explore them.

    Ken Pugh, Ken Pugh Inc.

  • For teams who have adopted Agile - or are moving in that direction - testing can still hinder product velocity. To achieve excellent application quality, we need testing that is intelligent and accessible to the whole team. In this session, we’ll discuss the role low-code and intelligence play in test automation, and how mabl can help you increase test coverage and overall product quality by ultimately removing the testing bottleneck.

    Andy Horgan, Solutions Engineer, mabl

 

march 9 12:30-1:15pm

  • Many modern web apps are made with popular JavaScript frameworks such as Angular, React, and Vue. We are also seeing many exciting new testing tools play well within the JavaScript ecosystem. Cypress enables both developers and testers to work together and write all types of tests: unit, integration, and end-to-end tests. We'll explore a few Cypress features together with a small demo.

    Jian Gao, Software Engineer in Test, logDNA

  • There are many different viewpoints when it comes to developing code, but most companies can agree on time to market and code quality as the two most important aspects of development. For time to market, many companies have built CI/CD pipelines to automate the build/integrate and deploy process. What about quality code? How can companies ensure quality in the CI/CD pipeline? One way to address this challenge is to optimize the quality stages of the CI/CD pipeline. Companies can improve their static code analysis, unit testing, code coverage, and post-deployment testing stages to ensure code quality is automatically built into the process.

    Join David Dang as he explores the best practices for implementing quality stages in the CI/CD pipeline and why it's an important step towards DevOps maturity. He explains the value of static code analysis tools that check for code quality and cyber security vulnerabilities; the importance of writing good unit testing scenarios (positive, negative, and boundary tests); the usage of code coverage to gain insight into under-covered code; and leveraging post-deployment testing (backend, services, and UI) to gauge the stability of the application. David's session will provide tips and tricks to ensure you're implementing your quality stages properly, along with interesting anecdotes from customers he has helped get back on track.
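    The quality stages described above can be thought of as automated gates in the pipeline. As a minimal, hypothetical sketch (the field names and thresholds here are invented for illustration, not taken from the session):

```javascript
// Hypothetical CI/CD quality gate: thresholds and metric names are
// illustrative only. A real pipeline would pull these numbers from its
// coverage report, static analyzer, and test runner.
function qualityGate({ coverage, criticalIssues, failedTests }) {
  const reasons = [];
  if (coverage < 80) reasons.push(`coverage ${coverage}% below 80% threshold`);
  if (criticalIssues > 0) reasons.push(`${criticalIssues} critical static-analysis issue(s)`);
  if (failedTests > 0) reasons.push(`${failedTests} failing test(s)`);
  return { passed: reasons.length === 0, reasons };
}

console.log(qualityGate({ coverage: 85, criticalIssues: 0, failedTests: 0 }));
// { passed: true, reasons: [] }
```

    The point of such a gate is that quality is decided automatically at each stage, rather than by a manual sign-off at the end.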

  • Everyone likes to feel like the hero. Nothing feels better than charging in on your horse and rescuing the town’s folk from danger! But have you ever considered that playing the hero can harm both you and your team?

    Agile requires us to eliminate silos, share information, and share the load. As an individual, you can't keep the town safe by yourself; the bigger the team, the more people need to be self-sufficient. We need to reframe our mindsets from a lone cowboy saving the day to an agile posse protecting the town from outlaw bugs. And while being the hero can feel good in short doses, saving the day can become exhausting for you and disempower your team.

    Jenna Charlton, Product Owner, Functionize

  • Testing is interaction plus verification. That’s it – you do something, and you make sure it works. You can perform those two parts manually or with automation. An automated test script still requires manual effort, though: someone needs to write code for those interactions and verifications. For web apps, verifications can be lengthy. Pages can have hundreds of elements, and teams constantly take risks when choosing which verifications to perform and which to ignore. Traditional assertions are also inadequate for testing visuals, like layout and colors. That’s lots of work for questionable protection.

    There’s a better way: automated visual testing. Instead of writing several assertions explicitly, we can take visual snapshots of our pages and compare them over time to detect changes. If a picture is worth a thousand words, then a snapshot is worth a thousand assertions. In this talk, I’ll show you how to do this type of visual testing with Applitools. We’ll automate a basic web UI test together using traditional techniques with Selenium WebDriver and Java, and then we’ll supercharge it with visual snapshots. We’ll see how Applitools Visual AI can pinpoint meaningful differences instead of insignificant noise. We’ll also see how Applitools Ultrafast Test Cloud can render those snapshots on any browser configuration we want to test without needing to rerun our tests in full. By the end of this talk, you’ll see how automated visual testing will revolutionize functional test automation!

    Andrew Knight, Developer Advocate, Applitools
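    To make the snapshot idea above concrete: at its simplest, comparing two snapshots means comparing pixel values. The toy sketch below is invented for illustration and is emphatically not how Applitools Visual AI works (which filters out insignificant noise rather than counting raw pixel differences):

```javascript
// Toy illustration of snapshot comparison: count pixels whose values differ
// beyond a tolerance between a baseline snapshot and a new one. Snapshots
// are modeled here as flat arrays of grayscale pixel values (0-255).
function diffSnapshots(baseline, current, tolerance = 10) {
  let changed = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (Math.abs(baseline[i] - current[i]) > tolerance) changed++;
  }
  return { changed, ratio: changed / baseline.length };
}

const baseline = [255, 255, 0, 0];
const current  = [255, 200, 0, 0]; // one pixel shifted noticeably
console.log(diffSnapshots(baseline, current).changed); // 1
```

    The weakness of this naive approach, flagging anti-aliasing and rendering noise as "changes," is exactly the problem the session's Visual AI discussion addresses.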

 

march 9 1:30-2:15pm

  • When I first started as a tester at my current company over 7 years ago my team was contributing code to a monolithic application. Soon after we started building our first microservice (yay) and moving towards a service-oriented architecture. Then, in what seemed like the blink of an eye, we went from one microservice to hundreds of them. It was then that I realised for the first time that the foundation of how the organisation worked was changing drastically. As a tester I went from owning my component to owning my team’s components to now owning all the components. We are no longer working in siloed teams but in squads across tribes in an internal open source model.

    So now what? How do you tackle all the changes? How has the tester role changed? What are the new responsibilities and how do they fit in the overall picture? How do you work inside the squad, with other squads and other tribes? How does a feature get into production, what does automation look like? How about releasing and support or incident management?

    Join me in this session where I will be sharing my personal experiences of recognising a pattern when organisations are changing, navigating through them and adapting ways of working to the new model. It’s about finding your voice and role in an ever changing world.

    Raluca Morariu, Delivery Manager, Betfair Romania

  • Software tests can be simplified to a pattern of action > action > action > expected results. With the embedding of machine-learned algorithms, repeated actions may actually change the expected result, rendering our test plans useless. Susan digs into the difference between objective and subjective test results, strategies for validating algorithm efficacy, and design patterns using machine learning.

    Susan Marie, Director of Quality, Parata Systems

  • A good tester advocates for the user. A great tester knows what that user wants and why. Stop making assumptions about your users!

    Learn how to use the analytics reports from tools like Google Analytics, Adobe Analytics, and CoreMetrics to inform all of your testing and development:

    – Pay attention to the analytics that are generated for the site or app you’re working with

    – The right analytics can inform your testing and shore up (or break!) any assumptions you may be making about your users

    – Learn which paths users are really taking through your site, so you can change or update your regression and automation testing (users may not be doing what you and the devs expect them to do!)

    Journey Becker, Quality Engineering Lead, The Zebra
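    A minimal sketch of the path-analysis idea, assuming page-view sessions already exported from an analytics tool (the data shape and page names are invented for illustration):

```javascript
// Hypothetical sketch: rank user navigation paths from analytics sessions so
// the most-traveled paths get regression/automation coverage first. Each
// session is modeled as an ordered list of page names.
function rankPaths(sessions) {
  const counts = new Map();
  for (const pages of sessions) {
    const path = pages.join(' > ');
    counts.set(path, (counts.get(path) || 0) + 1);
  }
  // Most frequent paths first.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

const sessions = [
  ['home', 'quote', 'checkout'],
  ['home', 'quote', 'checkout'],
  ['home', 'blog'],
];
console.log(rankPaths(sessions)[0]); // [ 'home > quote > checkout', 2 ]
```

    Even a rough ranking like this can reveal that real users take paths your regression suite never exercises.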

  • For many years, software teams have been told that “verification” means “did we build it right” and “validation” means “did we build the right thing”…that unit testing comes before acceptance testing…that you can’t possibly validate something before it exists. Lies, damned lies and politics…perpetuated by a lack of a third V-word: “volition”, the will to do and change. A true shift-left mindset extends much farther than you might think, even deep into project and product planning, when there is volition to get great software to customers rapidly.

    In this presentation, Head of Incubation Engineering at Tricentis and life-long testing nerd Paul Bruce will discuss how these three V-words are mandatory for modern software engineers, product owners/managers, and corporate leadership to exercise throughout the software lifecycle, to transform how we build [what] better [means] together. We'll go through concepts and practical examples of how to do Validation before Verification, how to grow Volition in colleagues and teams, how to improve your automation strategy using these concepts, and how to pragmatically measure progress in all three.

    Paul Bruce, Director of Incubation Engineering, Tricentis

 

march 9 2:30-3:15pm

  • Modern web applications present more and more challenges when developing, testing, and using them, and one of the challenges that shapes the path to an inclusive application is accessibility.

    The funny thing is that accessibility can sometimes be seen as a stone in the shoe: more work, something difficult to achieve, or simply something that does not add value. In this talk we will address exactly that, with a set of proposals, including ideas, tools, and methodologies, to make an application much more accessible when developing and testing it.

    In addition, we will try to put ourselves in the shoes of the other side of the coin: how a color-blind person would see your application, or how a blind person could access it, among other cases that show us the importance of creating applications with an accessible heart.

    Sergio Riveros, Senior Software Engineer, Adidas

  • Testing is like singing: everyone can sing, but not everyone will be paid to sing in front of a large audience. To create products, some of which are eventually of Mars Rover quality, Vaisala can't have just anyone testing; we are seeking results, finding information that matters from our customers' perspective.

    In many cases we've moved from requirements-to-test-cases analysis to more results-focused contemporary exploratory testing, including documenting with automation. Next up on my agenda is an important product in maintenance, with 100% of the team members changed and nearly 5,000 test cases inherited, and we're making sense of how to turn this around towards the results we need in contemporary exploratory testing.

    We’ll look through how we test and why, what we’re happy with and what we’re working on, to give you one view into how we intertwine delivering and improving from a test perspective.

    Maaret Pyhäjärvi, Principal Test Engineer, Vaisala

  • Description coming soon

    Jenny Bramble, Director of Quality Engineering, Papa


march 10 9:00-10:30am

Panel

 

Orchestration of Test: How do we go from the wild, wild test to a quality-focused organization?

Panelists: Susan Marie, Leandro Melendez, Joel Montvelisky

Sessions

 

march 10 10:45-11:30am

  • As startups mature, one of the biggest trends is the decision to build a Quality Program from the ground up. However, these smaller, agile, and more DevOps-centric companies aren't looking to hire a horde of testing specialists who act as a safety net, waiting for code to be tossed over the proverbial wall by developers to perform validation before a release can happen. Instead, engineering leadership is looking for a modern Quality Program that is significantly smaller and more specialized. They want Quality Leaders who rely on technology, push a concept of “Quality Culture,” and act as pseudo Quality Program Managers to ensure that engineering organizations successfully deliver value to their customers.

    This talk takes you on the journey of why Quality Programs are changing for startups, offers takeaways and strategies for starting a Quality Program from scratch, and presents examples of key programs and technologies you will probably need to mature your Quality Program and make it scale successfully.

    Jeff Sing, Sr Engineering Manager, Quality & Operations, Iterable

  • Breaking news! Automation development is software development. Yeah, it's true. Even if we are using a drag-and-drop or record-and-playback interface to create that automation, somewhere in the stack, under the hood or behind the curtain, there is code sequenced by our actions. We must start treating our automation initiatives as software development initiatives, lest we end up in a quagmire of unsustainability and early project death.

    Automation activities that aren't treated as software activities run the risk of being underestimated, delivered late, and difficult to maintain; each of these scenarios takes a bite out of our budget. Join Paul Grizzaffi as he explains why automation really is software, and the key points of software development that we should keep in mind when creating automation software.

    Paul Grizzaffi, Principal Automation Architect, Cognizant Softvision

  • In the past decade the software development paradigm has shifted to “deliver fast” -- with concomitant frameworks and methodologies to support that emphasis -- but without proper consideration of quality. So most teams end up failing fast and hard when development continues beyond a shaky foundation. To bring about positive change, we must improve both our knowledge base and our processes to achieve quality delivery without disturbing the bookkeeper’s project delivery timelines.

    Lessons learned from a career in research science can be applied to QA, with parallels to industry product quality models. Testing techniques and product delivery processes from research science will aid not just testers but the entire team in delivering quality software. More than just day-to-day team activities and testing tools, the science of testing is about the pursuit of knowledge and understanding for its own sake. Testers should foster their skills in the community with professional development activities. Those in attendance will learn about the successes and failures of applying a scientist’s approach to testing software, from the “publish-or-perish” mindset of science to “deliver fast” in IT.

    Thomas Haver, Test Automation Architect, M&T Bank

 

march 10 12:30-1:15pm

  • From various projects, I have collected many experiences of challenges, both technical and social, and of the solutions to them. From these I have distilled a list of golden rules for successfully managing a test project.

    In this session, we will talk about the most common challenges and the solutions to them, to achieve a strong QA environment.

    Mesut Durukal, QA Chapter Lead, Rapyuta Robotics

  • Software organizations are investing heavily in modernizing testing practices and tools to align with DevOps and Agile. But they are heavily challenged by manual reporting, weak governance, low focus on continuous improvement, fragmented tools, etc. Value Stream Maps (VSMs) help an organization analyze, capture, and visualize metrics related to key IT processes. VSMs help identify bottlenecks in your processes and estimate the quantitative benefits of reducing rework, eliminating wait time, improving productivity, etc.

    George Ukkuru, Sr Principal Architect, Marlabs Inc.

  • Have you ever wondered why these modern methodologies do not seem to make things easier for your teams? Why is it that QA is under even more pressure and struggling to keep up?

    In this presentation, Leandro will give details and examples of the main differences that software development has gone through in the last few years. He will help the audience understand the main differences in the old and new approaches while providing insights to identify and propose where the steps taken may be misguided.

    Last, he will be explaining why QA must be addressed and understood from a different perspective than the concepts we have been following in past iterations of the software development practices.

    Leandro Melendez, Performance Advocate, Grafana-K6

  • We’ve all had conversations about tech debt in sprint planning and retrospectives. We often talk about how we must “pay it down”. But have you ever thought about your test debt? What is test debt and where does it come from? How does test debt impact our ability to deliver quality software?

    Join Elise Carmichael from Functionize to get real about your test debt and discuss meaningful strategies to tackle test debt and learn how you can leverage the power of AI and ML to maximize the impact of your testing.

 

march 10 1:30-2:15pm

  • Really hard.

    There are hundreds of variables that affect the end user's perceived performance. None of the perceived performance is measured with traditional load testing tools. This means that, as performance testers, we're forced to add too many asterisks to our results... Until now!

    In this talk, we'll discuss frontend performance testing and the metrics that matter. We'll gather usage patterns and simulate users with k6.io. Next, we'll drive and capture real user performance metrics with puppeteer and browserless. Lastly, we'll show how you can integrate these tools into your existing performance testing or monitoring frameworks.

    John Hill, Web UI Test Engineer, KBR, Inc.
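    A small, self-contained sketch of the reporting side of this workflow, assuming timing samples (e.g. largest-contentful-paint values in milliseconds) have already been captured from browser runs; the sample data and percentile choice here are invented for illustration:

```javascript
// Illustrative only: summarize captured frontend timing samples with a
// percentile, as load-testing reports commonly do. Uses the simple
// nearest-rank method on a sorted copy of the samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical largest-contentful-paint samples (ms) from repeated page loads.
const lcpMillis = [1200, 950, 1800, 1100, 2400, 1300, 1000, 1250, 1150, 990];
console.log(percentile(lcpMillis, 95)); // 2400
```

    Percentiles matter here because a mean hides exactly the slow outlier experiences that frustrate real users.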

  • Automation has gone from optional to mandatory in the past few years when it comes to developing software at speed. This has led teams, and especially testers, to adapt and evolve together with new technologies to cope with automation needs.

    No matter the original motivation, you might have somehow ended up crafting a strategy for doing test automation.

    Now the question is, how did it mature? When was the last time you actually took a moment to do a little retrospective regarding your automation strategy? More so, when was the last time that someone reviewed the scripts themselves?

    We will share our experience reviewing the test strategy of multiple projects and teams, paying special attention to the quality of our automation efforts. In doing so we will try to show you how every detail counts: asking the right questions at the right time, validating the way we pick our selectors, making sure there is proper communication between the automators and the rest of the team, and taking a step back when necessary to assess the current situation and how it could be improved or changed towards a different direction.

    Federico Toledo, COO at Abstracta

    Matias Fornara, Lead Quality Engineer at Abstracta

  • The Test Orchestration Process is a new methodological approach aiming to correctly coordinate all aspects of quality and testing, across multiple players and teams, generating levels of synchronization and value that are way higher than the sum of all the testing tasks being conducted and reported individually by our teams.

    The test orchestration approach consists of three separate phases and ten individual tasks or assignments that when used together can help you and your teams achieve and surpass your quality and visibility objectives.

    During this session we will review each of these tasks, review examples of how they can be implemented, and understand how you can include them as part of your overall generic testing approach.

    Joel Montvelisky, Co-founder & Chief Solution Architect, Practitest

  • Has your software testing environment thrown you out of the saddle and caused you to miss your targets? If you encounter delays getting access to test data, external APIs, and unstable endpoints in your test environment, use simulation to create virtual systems that behave just like the real things! With virtual assets and data, your testing can continue without a hitch.

    Join this session and I’ll show you how to wrangle your wayward APIs for continuous testing with simulation and corral your virtual assets in a reusable sandbox test environment.

    You’ll also learn how to keep the test chute moving with synthetic data, then circle back to management with test results reporting to understand coverage gaps and quality risks. With these techniques, you can hang your hat on a successful continuous testing process that meets quality and delivery goals.

    Grigori Trofimov, Solutions Architect, Parasoft

 

march 10 2:30-3:15pm

  • How many times have you heard statements like, "How did we miss this in test?", "This defect doesn't happen on my machine", "We do not need automation", or "We are waiting on testing to give us the green light"?

    Far too often, teams are spending a lot of time diving into discussions that they no longer need to have. This builds frustration, affects team alignment, and can potentially impact the quality and milestones of the project.

    Through the years, we have seen a need for processes, tools, and old-school approaches to change. The discussions we needed to have years ago are no longer the same now. There is a need for strategic changes in how we operate within a project and how we communicate across teams.

    In this HIGHLY INTERACTIVE presentation, we will discuss many of the well-known phrases, philosophies, and theories around testing of years past, and how we must overcome those obstacles to be successful today. We will discuss how the dynamics within teams must change, and most importantly, how you, as a tester, can influence across the organization.

    Mike Lyles, Director of QA and Project Management, Bridgetree

  • “What testing framework should we use?” is a question I hear more and more within engineering teams. Not all testing frameworks can suit your quality needs, so how do you decide which one to use? I was recently asked this question and decided, instead of settling for an existing framework, why not build my own! Thus, the FRANC (Full Risk Analysis of New Concept or Concern) test framework was born!

    FRANC uses a risk-based testing approach, taking into account business, product, development, and testing assumptions and mapping them to agreed-upon points of risk. Where this framework truly shines is in its ability to create tight feedback loops. This results in catching missed acceptance criteria and changes to acceptance criteria prior to a product's launch.

    I made enlightening discoveries while building this framework. Experiencing the benefits of having full control of a framework, along with its drawbacks, enabled me to expand my testing knowledge beyond anything I could have imagined.

    Monica Friend, OppFind
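    FRANC's actual risk mapping isn't spelled out in the abstract, but classic risk-based testing scores each item by likelihood and impact. The scales and thresholds below are invented purely for illustration:

```javascript
// Hypothetical risk-based prioritization sketch; not FRANC's real mapping.
// Likelihood and impact are each rated on a 1-5 scale by the team.
function riskScore(likelihood, impact) {
  return likelihood * impact;
}

// Map a score to a test priority (thresholds are illustrative).
function priority(score) {
  if (score >= 15) return 'test first';
  if (score >= 8) return 'test soon';
  return 'test if time allows';
}

console.log(priority(riskScore(5, 4))); // 'test first'
```

    The value of any such scheme is less the arithmetic than the conversation it forces: business, product, development, and testing must agree on the ratings, which is the feedback loop the abstract describes.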

  • Organizations talk about delivering high-quality products. The business cares about the quality of delivered features. Our customers demand quality in the product they are buying. Delivery teams talk about topics like internal code quality, testing, DevOps culture and how to improve their processes.

    With so many different perspectives, it’s no wonder we get confused about how to deliver that product that will delight our customers. In this talk, Janet shares her experiences about how to look at quality from a holistic point of view – for both product and process quality. Testing activities support the level of quality needed, and this includes the collaborative effort of the whole team. Join Janet to learn a new, holistic perspective to quality.

    Janet Gregory, Agile Testing and Process Consultant, Dragonfire Inc.

  • The SDET, a mythical creature found in fiction novels and fairytales. Never to be found or heard from at dev conferences. However, the SDET creature is one of importance as it is capable of coding and testing applications - two skills that are rarely visible in nature. For the first time in history, let’s join together to watch this creature as it takes on its dev form to code a React web app. Afterwards, the SDET magically transforms into its tester form to think about a solid testing strategy. Somehow the SDET is able to ensure that the app is working functionally and also looks great visually. A rare creature that can code both functional Cypress tests as well as visual WebdriverIO tests.

    How did it do that?

    Astonishing footage will reveal:

    How to create a React web application

    How to create a test strategy for a web application

    How to perform visual e2e testing

    How to perform functional Cypress testing

    You won’t want to miss this exciting and rare showing of the mythical SDET!

    Nikolay Advolodkin, Sauce Labs


meet our speakers

thank you to our

#TSQA2022 conference sponsors!