testing Archives - SD Times
https://sdtimes.com/tag/testing/

Mabl’s load testing offering provides increased insight into app performance
https://sdtimes.com/test/mabls-load-testing-offering-provides-increased-insight-into-app-performance/
Wed, 03 May 2023

Low-code intelligent test automation company mabl today announced a new load testing offering designed to help engineering teams assess how their applications will perform under production load.

This capability integrates into mabl’s SaaS platform so that users can enhance the value of existing functional tests, move performance testing to an earlier phase of the development lifecycle, and cut down on infrastructure and operations costs.

“The primary goal is to help customers test application changes under production load before they release them so that they can detect any new bottlenecks or things that they would have experienced as the changes hit production before release,” said Dan Belcher, co-founder of mabl.

According to the company, these API load testing capabilities allow for the unification of functional and non-functional testing by utilizing functional API tests for performance and importing Postman Collections to cut down on the time it takes to create tests. 
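
Mabl’s implementation is proprietary, so purely as a generic sketch of the idea of reusing a functional API check as a load test, the snippet below uses the open-source Locust library as a stand-in; the endpoint, host and pass/fail criteria are hypothetical.

```python
# Generic illustration only (not mabl's product API): a functional API check
# reused as a load test with the open-source Locust library.
from locust import HttpUser, task, between


class OrdersApiUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time per virtual user

    @task
    def list_orders(self):
        # The same assertion a functional test would make also serves as the
        # pass/fail check under load. The endpoint path is a hypothetical example.
        with self.client.get("/api/orders", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"unexpected status {response.status_code}")
```

Run, for example, with locust -f load_test.py --host https://staging.example.com --users 500 --spawn-rate 50 to ramp up 500 simulated users against a staging environment.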

Mabl also stated that this performance testing lowers the barrier to a sustainable and collaborative performance testing practice, even for teams that do not have dedicated performance testers or specific performance testing tools. 

“Anyone within the software team can use it, so it is not limited to just the software developers or just the performance experts,” Belcher said. “Because we’re low-code and already handling the functional testing, it makes it super easy for the teams to be able to define and execute performance tests on their own without requiring specialized skills.”

These tests can also be configured to run alongside functional tests on demand, on a schedule, or as part of CI/CD pipelines. 

The four stages of mobile software testing maturity
https://sdtimes.com/testing/the-four-stages-of-mobile-software-testing-maturity/
Mon, 17 Apr 2023

If you’re like most organizations that develop mobile apps, you have some kind of systematic mobile software testing in place. You might even be using automation frameworks to execute your tests, and you might be testing across a high number of devices, browsers and operating system versions.

But if you think that makes you a standout organization when it comes to mobile testing, think again. A fully developed mobile testing strategy includes more than just simple test automation and broad device coverage.

To me, the best way to explain what goes into the most effective mobile testing routines is to think in terms of a mobile testing maturity model. This article discusses what I see as the four maturity stages of mobile testing, and how organizations can advance their maturity by leveraging techniques that go above and beyond the basics.

The stages of mobile software testing

The approach that most businesses take today to mobile testing falls into one of the following four maturity stages.

Stage 1: No mobile testing plan

Some businesses have no systematic mobile testing strategy in place at all. They perform testing on an ad hoc basis, if ever.

Fortunately, most organizations today have evolved beyond this stage because they realize that having some kind of testing regime is critical for identifying problems that can undercut the user experience in mobile apps. That said, organizations that have only recently begun to develop mobile apps, or that don’t frequently update their apps, may not have a systematic approach for testing them.

Stage 2: Manual testing and low device coverage

Businesses at a slightly higher maturity stage of mobile testing perform tests on a routine basis, but they rely heavily on manual tests. They aren’t automating tests, which means the tests are inefficient to repeat. They also struggle to test across a wide range of mobile device, operating system and browser types because they lack the resources to run tests manually on many different types of environments.

Stage 3: Automated testing and high device coverage

When organizations advance to automated mobile testing using frameworks like Appium, they’re also able to run tests across a wider variety of mobile environment configurations. That leads to better test coverage and a lower risk that users will run into problems on their particular devices that the business didn’t test for.
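
For readers unfamiliar with what that automation looks like, here is a minimal sketch of an Appium-based test using the Appium Python client; the capability values, app path and accessibility IDs are hypothetical, and the exact setup varies by client and server version.

```python
# Minimal Appium sketch (Appium Python client). Capability values, the app
# path and the accessibility ids are hypothetical placeholders.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.set_capability("appium:deviceName", "Pixel_6_API_33")   # emulator or real device
options.set_capability("appium:app", "/path/to/app-debug.apk")  # app under test

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # The same script can be pointed at many device/OS combinations to widen coverage.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "welcome_banner").is_displayed()
finally:
    driver.quit()
```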

However, just because you’re testing across a wide variety of environments doesn’t mean you’re testing everything you should test. You might be running tests for only a subset of all available application functionality, for example. Or you might be ignoring certain categories of testing, like accessibility testing. These considerations may be overlooked because your business lacks the time or resources necessary to implement automated tests for every test priority.

A second pitfall that some businesses encounter when they automate mobile testing, but not in a mature way, is that they struggle to interpret test results. When a test fails, they can’t quickly determine which change in their application triggered the failure. Nor can they efficiently pull data from their tests in order to assess things like exactly how long a page inside an app took to load or exactly when a crash occurred.

Stage 4: Testing for everything, everywhere

Businesses that overcome these challenges advance to the highest level of mobile testing maturity. They gain the ability to test every aspect of application functionality, perform every relevant category of test and run tests for every environment configuration that their users might encounter.

On top of that, they draw on automations not just to run tests, but also to help interpret test results. Rather than manually examining failed tests or parsing files to identify the timing of different events, they collect relevant information automatically, which saves time and helps them to scale even further.
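
As one small, concrete illustration of that kind of automation, the sketch below pulls failures and slow cases out of a standard JUnit-style XML report rather than having a person read the raw output; the file name and the slowness threshold are hypothetical.

```python
# Sketch: summarize a JUnit-style XML report so failures and slow tests are
# surfaced automatically. File name and threshold are hypothetical.
import xml.etree.ElementTree as ET

SLOW_SECONDS = 5.0

root = ET.parse("mobile-test-results.xml").getroot()
for case in root.iter("testcase"):
    name = f'{case.get("classname")}.{case.get("name")}'
    duration = float(case.get("time", "0"))
    failure = case.find("failure")
    if failure is not None:
        print(f"FAILED {name}: {failure.get('message', 'no message')}")
    elif duration > SLOW_SECONDS:
        print(f"SLOW   {name}: {duration:.1f}s")
```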

Evolving your mobile test maturity level

Another way of framing mobile testing maturity is to say that the more your testers are leveraging automation, the more advanced their testing strategy is.

But to be clear, I’m not talking here just about automated testing frameworks. Those only get you to the third maturity level.

Instead, businesses should take advantage of automation tools that can do things like generate test scripts, which makes it easier to support more test cases, or automatically repeat tests on other devices in order to maximize coverage without requiring manual deployment of every test.

When you automate all aspects of mobile testing – not just the execution of the tests themselves, but also the process of creating tests, deploying tests and interpreting test results – your mobile testing strategy becomes as efficient, scalable and mature as it can be. That’s ultimately what separates the best from the rest when it comes to mobile testing, and it’s what every business should strive for if it wants to delight users while also keeping testing operations efficient.

Quality assurance assures great user experiences
https://sdtimes.com/test/quality-assurance-assures-great-user-experiences/
Wed, 29 Mar 2023

The user experience has become critically important in today’s digital world, even as organizations struggle to align testing with the speed of delivery.

Functional tests, performance tests and UI tests, among others, can reveal if an application isn’t behaving or performing as expected. But on their own, they can’t tell you if your user is having a great experience. And as we know, a poor experience can lead to lost customers and revenue, as well as damage to your company’s reputation.

To ensure a good user experience, organizations need to understand their products, they need to know their markets and they need to have empathy for their users. Once that’s established, according to Gevorg Hovsepyan, head of product at test automation platform mabl, you need to make sure your testing strategy aligns with that.

“You need to have a good pulse on what your customers are experiencing, and the quality of that,” Hovsepyan said. “Because ultimately, your goal is to deliver a great customer experience. It’s not just to make sure your API endpoint provides the right JSON structure.” 

With market changes and the push to make everything digital driving faster delivery and better experiences, you need to do UI testing to understand performance, you need to understand accessibility, and you need to appreciate the impact on the organization’s business and revenue if those things aren’t addressed, he said.

“For example,” Hovsepyan explained, “if you’re an airline and plan to offer discounted fares on a particular date, your website needs to be able to handle that surge in traffic.  If your website doesn’t perform to enable 10,000 people to buy those tickets, or 1,000 people to buy those tickets, then your bottom line takes a direct hit. Your CFOs and your executives will look at that and ask what happened, and those would-be customers are less likely to book another trip with you.” 

This has led to a shift in mindset to determine where – and how – you test the experience of your customer. It has become increasingly important for the entire organization to contribute to  quality.

Hovsepyan said mabl believes everyone in the organization should be able to participate in building high-quality software, and approaches testing  from a low-code perspective that enables product managers, business teams and engineers who wouldn’t always participate in quality to be able to quickly create tests or reports that are important to them.

Mabl sees quality engineering as a strategic practice that integrates testing into development pipelines to improve the customer experience and business outcomes. Similarly to DevOps, quality engineering seeks to bring teams from across the software development organization together to establish a shared understanding of quality and how everyone can contribute to it. 

Hovsepyan said that low-code test automation enables everyone to participate in testing and contribute to quality engineering, even if they don’t have a lot of coding experience.

“At mabl, we believe that quality is a combination of multiple things from functional to non-functional. So our solution is a modern SaaS cloud platform that unifies all testing capabilities.” Beyond functional testing, mabl has added visual testing, PDF testing, accessibility testing and performance reporting, bringing different testing capabilities into a single unified quality engineering platform that enables users to assess quality, he explained.

Taking steps toward quality engineering

Hovsepyan said first and foremost, organizations should start with a strategic mindset and seek to understand the state of their business, what the business is trying to accomplish, and how quality-related issues might contribute to business performance – positively or negatively. “If you don’t do that,” he said, “selling your ideas down the road is going to get increasingly harder.”

Once you understand the state of the business, he advised doing a self-assessment to determine the state of quality within your company. “This doesn’t necessarily include understanding the quality of your technology,” he pointed out. “It’s also understanding your org structure, and the skill sets you have in your team. How do you see your plans developing? How can you broaden quality contributions so that testing matches the needs of your customers in the long-term?” 

Finally, he said, assess the maturity of your testing capabilities. Is the team mostly doing manual testing, or is some automation involved? Do you have scripts and infrastructure in place? Then, he concluded, look for modern technologies that are coming to market to help accelerate the journey toward quality engineering.

Content provided by SD Times and mabl

SD Times Open-Source Project of the Week: Touca
https://sdtimes.com/test/sd-times-open-source-project-of-the-week-touca/
Fri, 03 Mar 2023

Touca is a continuous regression testing tool that provides engineering teams with a real-time visual comparison of their software’s performance and behavior against a previous trusted version. This can help them identify any unintended side effects of their daily code changes.

“It is still too difficult and time-consuming for software engineers to gain confidence in their day-to-day code changes. Most engineering teams suffer through long QA feedback cycles or resort to writing hard-to-maintain unit tests and integration tests. Sadly, for most types of software, writing reliable, automated, and developer-friendly tests is still very difficult,” Pejman Ghorbanzande wrote in a post. “Touca lets you describe the behavior and performance of any version of your software for any number of inputs.”

The testing tool submits a description written out by a software team to a remote server that automatically compares it against a previous baseline version and reports any differences.
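
The snippet below follows the pattern shown in Touca’s Python SDK documentation: a workflow captures values from the code under test, and the server diffs them against the trusted baseline. The find_student function and the checked fields are illustrative stand-ins, and the exact API may differ across SDK versions.

```python
# Based on the pattern in Touca's Python SDK docs; find_student and its fields
# are illustrative stand-ins for real code under test.
from dataclasses import dataclass
import touca


@dataclass
class Student:
    fullname: str
    gpa: float


def find_student(username: str) -> Student:
    # Placeholder for the real code under test.
    return Student(fullname=username.title(), gpa=3.8)


@touca.workflow
def students(username: str):
    student = find_student(username)
    touca.check("fullname", student.fullname)  # captured and compared to the baseline
    touca.check("gpa", student.gpa)


if __name__ == "__main__":
    touca.run()
```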

It can then be used to share insights about new versions. 

Touca first started out as a regression testing solution for enterprise software companies building mission-critical systems but has now evolved to support individual developers and smaller teams, according to Ghorbanzande.

Version 2.0 was released last month and contains an easy-to-self-host server that stores test results, a CLI that enables snapshot testing, four SDKs (Python, C++, Java and JavaScript), test runner and GitHub Action plugins, and more.

Speed – and other stuff – drives the need for test automation
https://sdtimes.com/ai-testing/speed-and-other-stuff-drives-the-need-for-test-automation/
Wed, 22 Feb 2023

It started with working from home. That’s what fired off the rocket of digital transformation.

People who converted to virtual interactions with their customers did well, and those who didn’t suffered. But to do so, and keep up with those virtual competitors, often meant exposing things before they were ready, or even fully thought out. That led to a lot of technical debt, yet still didn’t calm the need for speed.

If you look to the Facebook-type models of extremely rapid releases, you’d need a highly scalable infrastructure with a rigorous testing environment – which on its face seems anathema to digital transformation –  to give you the ability to rapidly stand things up to test, and to perform those tests.

So, with your business online and on the line, it’s almost impossible to keep testing at the pace the business requires without employing automation.

So said Arthur Hicken, evangelist at test solutions provider Parasoft, in a discussion we had leading up to this week’s Improve: Testing conference – at which Hicken will be presenting a session.

“You have to do things right now, you’re testing, and you’ve got to have a high degree of automation,” Hicken told SD Times. “You’ve got to have a high degree of confidence in that automation. And you’ve got to make sure that you can do everything you can not just to find bugs, but you’ve got to stop creating so many bugs in the first place.”

Parasoft has supported rigorous testing efforts for years, in medical devices and other areas where safety is critical – what Hicken likes to call “planes, trains and automobiles.” And in that space, he said, organizations are kind of slow to adopt new practices.

“It’s interesting because what happens is that we see the enterprise market looking more at the rigor because they need that mission-critical reliability,” Hicken said. “And we see the safety-critical market where the volume of code is exploding, we see them adopting. I mean, agile is becoming the norm. It’s not the disruptors, it’s the market leaders. DevOps, containerization, lightweight tools, CI/CD have all become the norm.”

Parasoft this year has been positioned among the leaders in the Forrester Wave for Continuous Automation Testing platforms, showing both a strong product offering and a strong strategy. Hicken said Parasoft has been hyper-focused on AI augmentation – not looking to build an AI “silver bullet,” but looking at real problems people have with test creation, test execution and test maintenance. He calls these, “all the ways to reduce the effort on developers, especially tedious efforts. And to give them guidance for things that might not be obvious.”

It also involves “the ability to do self-healing, and the test impact analysis so that when you do make a change, and you’re worried about is this change that I made going to break my online infrastructure, that we can give you the exact correct set of tests that make sure that that functionality is working properly, no more, no less,” he explained.
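
Parasoft’s test impact analysis is its own technology, but the underlying idea can be sketched in a few lines: keep a map from source files to the tests that exercise them (typically derived from per-test coverage data) and select only the tests touched by a change. The map and the change set below are hypothetical.

```python
# Toy sketch of test impact analysis (not Parasoft's implementation): select
# only the tests whose covered files intersect the change set.
coverage_map = {  # normally derived from per-test coverage data
    "tests/test_checkout.py": {"src/cart.py", "src/payment.py"},
    "tests/test_search.py": {"src/search.py"},
    "tests/test_profile.py": {"src/accounts.py"},
}

changed_files = {"src/payment.py"}  # e.g. parsed from a git diff of the commit

impacted = sorted(
    test for test, covered in coverage_map.items() if covered & changed_files
)
print(impacted)  # -> ['tests/test_checkout.py']
```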

At the end of the day, Hicken said, “when you’re looking for continuous testing tools, you’re looking for something that can solve an actual problem you have. You’re not looking for, say, a service virtualization tool; you’re looking for a tool that can help you test before other components are ready. You’re not looking for a UI automation tool; you’re looking to make sure that your tool isn’t going to break down when you release it.” 

Tricentis extends Testim platform to mobile devices
https://sdtimes.com/test/tricentis-extends-testim-platform-to-mobile-devices/
Fri, 17 Feb 2023

Tricentis is attempting to meet the growing demand for high quality mobile applications by releasing Testim Mobile, a mobile extension to its testing platform Testim.

According to Tricentis, testing for mobile applications can pose a lot of challenges, because unlike browsers, phones and tablets can vary widely in performance, size, and operating system. 

With Testim Mobile, testers can use either physical devices or emulators in testing, and tests can also be run in parallel across those different testing options. 

Devices can be set up and configured in minutes using the Tricentis Mobile Agent, which helps to also simplify device management. 

Testers using Windows are still able to test iOS devices from their laptops by connecting the devices to the Mobile Agent. 

For testers looking to utilize an emulator or simulator instead of physical devices, they can upload apps into the cloud and then share them with other team members too. 

It also allows for codeless authoring of tests; traditionally, mobile testing has required scripted or coded tests, adding an additional skills requirement to the testing process.

Users of Testim should have an easy time getting used to Testim Mobile, as it uses the same UI. 

“Ever-changing consumer expectations require organizations to frequently evolve their mobile applications, risking challenges due to a variety of operating systems, unstable network connections, geo-location capabilities, and more. As more mobile-first companies enter the market, organizations must ensure high performance, functionality, and usability for their apps to be successful,” said Suhail Ansari, chief technology officer at Tricentis. “Testim Mobile delivers innovative mobile testing capabilities that enable agile teams to quickly evaluate quality, debug failures, and use feedback to innovate on their applications through a continuous build and release cycle.”

How to build trust in AI for software testing
https://sdtimes.com/testing/how-to-build-trust-in-ai-for-software-testing/
Fri, 03 Feb 2023

The application of artificial intelligence (AI) and machine learning (ML) in software testing is both lauded and maligned, depending on who you ask. It’s an eventuality that strikes balanced notes of fear and optimism in its target users. But one thing’s for sure: the AI revolution is coming our way. And, when you thoughtfully consider the benefits of speed and efficiency, it turns out that it is a good thing. So, how can we embrace AI with positivity and prepare to integrate it into our workflow while addressing the concerns of those who are inclined to distrust it?

Speed bumps on the road to trustville

Much of the resistance toward implementing AI in software testing comes down to two factors: a rational fear for personal job security and a healthy skepticism about the ability of AI to perform tasks contextually as well as humans. This skepticism is primarily based on limitations observed in early applications of the technology. 

To further promote the adoption of AI in our industry, we must assuage the fears and disarm the skeptics by setting reasonable expectations and emphasizing the benefits. Fortunately, as AI becomes more mainstream — a direct result of improvements in its abilities — a clearer picture has emerged of what AI and ML can do for software testers; one that is more realistic and less encumbered by marketing hype.

First things first: Don’t panic

Here’s the good news: the AI bots are not coming for our jobs. For as long as there have been AI and automation testing tools, there have been dystopian nightmares about humans losing their place in the world. Equally prevalent are the naysayers who scoff at such doomsday scenarios as being little more than the whims of science fiction writers.

The sooner we consider AI to be just another useful tool, the sooner we can start reaping its benefits. Just as the invention of the electric screwdriver has not eliminated the need for workers to fasten screws, AI will not eliminate the need for engineers to author, edit, schedule and monitor test scripts. But it can help them perform these tasks faster, more efficiently, and with fewer distractions.

Autonomous software testing is simply more realistic — and more practical —  when viewed in the context of AI working in tandem with humans. People will remain central to software development since they are the ones who define the boundaries and potential of their software. The nature of software testing dictates that the “goal posts” are always shifting as business requirements are often unclear and constantly changing. This variable nature of the testing process demands continued human oversight.

The early standards and methodologies for software testing (including the term “quality assurance”) come from the world of manufacturing product testing. Within that context, products were well-defined and testing was far more mechanistic than it is for software, whose traits are malleable and often changing. In reality, software testing is not amenable to such uniform, robotic methods of assuring quality. 

In modern software development, there are many things that can’t be known by developers. There are too many changing variables in the development of software that require a higher level of decision-making than AI can provide. And yet, while fully autonomous AI is unrealistic for the foreseeable future, AI that supports and extends human efforts at software quality is still a very worthwhile pursuit. Keeping human testers in the mix to consistently monitor, correct, and teach the AI will result in an increasingly improved software product.

The three stages of AI in software testing

AI for software testing essentially has three stages of maturity:

  • Operational Testing AI
  • Process Testing AI
  • Systemic Testing AI

Most AI-enabled software testing is currently performed at the operational stage. Operational testing involves creating scripts that mimic the routines human testers perform hundreds of times. Process AI is a more mature version of Operational AI, with testers using it for test generation. Other uses may include test coverage analysis and recommendations, defect root cause analysis and effort estimations, and test environment optimization. Process AI can also facilitate synthetic data creation based on patterns and usages. 
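
Synthetic test data generation is easiest to picture in its simplest, non-AI form; the sketch below uses the open-source Faker library to produce pattern-shaped records, whereas Process AI tools would additionally learn those patterns from real usage. The field names are hypothetical.

```python
# Simplest form of pattern-based synthetic test data, using the open-source
# Faker library (a stand-in illustration, not any vendor's Process AI feature).
from faker import Faker

fake = Faker()
fake.seed_instance(42)  # reproducible data sets make test runs repeatable


def synthetic_customer() -> dict:
    return {
        "name": fake.name(),
        "email": fake.email(),
        "signup_date": fake.date_this_decade().isoformat(),
        "country": fake.country_code(),
    }


customers = [synthetic_customer() for _ in range(100)]
```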

The third stage, Systemic AI, is the least tenable of the three owing to the enormous volume of training it would require. Testers can be reasonably confident that Process AI will suggest a single feature or function test to adequately assure software quality. With Systemic AI, however, testers cannot know with high confidence that the software will meet all requirements in all situations. AI at this level would test for all conceivable requirements – even those that have not been imagined by humans. This would make the work of reviewing the autonomous AI’s assumptions and conclusions such an enormous task that it would defeat the purpose of working toward full autonomy in the first place.

Set realistic expectations

After clarifying what AI can and cannot do, it is best to define what we expect from those who use it. Setting clear goals early on will prepare your team for success. When AI tools are introduced to a testing program, the rollout should be treated as a software project that has the full support of management, with well-defined goals and milestones. Offering an automated platform as an optional tool for testers to explore at their leisure is a setup for failure. Without a clear directive from management and a finite timeline, it is all too easy for the project to never get off the ground. Give the project a mandate and you’ll be well on your way to successful implementation. Be clear about who is on the team, what their roles are, and how they are expected to collaborate. That also means specifying what outcomes are expected and from whom. 

Accentuate the positive

Particularly in agile development environments, where software development is a team sport, AI is a technology that benefits not only testers but also everyone on the development team. Give testers a stake in the project and allow them to analyze the functionality and benefits for themselves. Having agency will build confidence in their use of the tools, and convince them that AI is a tool for augmenting their abilities and preparing them for the future.

Remind your team that as software evolves, it requires more scripts and new approaches for testing added features, for additional use patterns and for platform integrations. Automated testing is not a one-time occurrence. Even with machine learning assisting in the repairing of scripts, there will always be opportunities for further developing the test program in pursuit of greater test coverage, and higher levels of security and quality. Even with test scripts that approach 100 percent code execution, there will be new releases, new bug fixes, and new features to test. The role of the test engineer is not going anywhere, it is just evolving.

Freedom from the mundane

It is no secret that software test engineers are often burdened with a litany of tasks that are mundane. To be effective, testing programs are designed to audit software functionality, performance, security, look and feel, etc. in incrementally differing variations and at volume. Writing these variations is repetitive, painstaking, and—to many—even boring. By starting with this low-hanging fruit, the mundane, resource-intensive aspects of testing, you can score some early wins and gradually convince the skeptics of the value of using AI testing tools. 

Converting skeptics won’t happen overnight. If you overwhelm your team by imposing sweeping changes, you may be setting yourself up for failure. Adding AI-assisted automation into your test program greatly reduces the load of such repetitive tasks, and allows test engineers to focus on new interests and skills.

For example, one of the areas where automated tests frequently fail is in the identification of objects within a user interface (UI). AI tools can identify these objects quickly and accurately to bring clear benefit to the test script. By focusing on such operational efficiencies, you can make a strong case for embracing AI. When test engineers spend less time performing routine debugging tasks and more time focusing on strategy and coverage, they naturally become better at their jobs. When they are better at their jobs, they will be more inclined to embrace technology. 
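
The AI techniques vendors apply here are proprietary, but the brittleness they address can be shown with a deliberately simple fallback-locator sketch in Selenium: when the primary locator for a UI object breaks, try known alternates before failing. The locator values are hypothetical; real self-healing tools derive and rank such candidates automatically.

```python
# Simplified illustration of the brittle-locator problem (not an AI
# implementation): try several known locators for one UI object before giving up.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

CHECKOUT_BUTTON_LOCATORS = [  # hypothetical candidate locators for one button
    (By.ID, "checkout-btn"),
    (By.CSS_SELECTOR, "button[data-test='checkout']"),
    (By.XPATH, "//button[normalize-space()='Checkout']"),
]


def find_checkout_button(driver):
    for by, value in CHECKOUT_BUTTON_LOCATORS:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException("checkout button not found with any known locator")
```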

In the end, AI is only as useful as the way in which it is applied. It is not an instantaneous solution to all our problems. We need to acknowledge what it does right, and what it does better. Then we need to let it help us be better at our jobs. With that mindset, test engineers can find a very powerful partner in AI and will no doubt be much more likely to accept it into their workflow.

Tricentis Test Automation marries low-code with testing
https://sdtimes.com/test/tricentis-test-automation-marries-low-code-with-testing/
Mon, 30 Jan 2023

Tricentis Test Automation is a new SaaS-based solution that supports enterprise app, API, and business process testing. 

“While organizations are building their businesses and deploying applications on the cloud, most teams are constrained by legacy processes which are creating slow, error-prone, and costly challenges due to the lack of a viable cloud-based testing solution,” said Suhail Ansari, the chief technology officer at Tricentis. “Tricentis Test Automation enables organizations to automate end-to-end quality for their integrated cloud-based solutions with faster speeds, no-code, and reduced test maintenance costs.”

Businesses can use it to test end-to-end business processes and complex business applications. They can also verify quality across their integrated platforms. 

The user-friendly SaaS-based solution allows users to quickly create automated tests, ranging from functional UI tests to API/microservices testing, without prior coding or test automation knowledge, and enables them to scale up accordingly.

With model-based UI test automation, users can build codeless, resilient, automated tests through a unique approach that separates the automation model from the underlying application.
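
Tricentis’ model-based approach is its own technology, but the general principle of separating the application model from the test logic can be illustrated with a plain page-object-style sketch: locators live in the model, so a UI change is absorbed in one place. Class names and locators below are hypothetical.

```python
# Generic sketch of keeping the application model separate from the tests
# (a page-object-style illustration, not Tricentis' implementation).
from selenium.webdriver.common.by import By


class LoginPage:
    """Model of the login screen: locators live here, not in the tests."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


def test_valid_login(driver):
    # The test reads as a business step; if the UI changes, only LoginPage changes.
    LoginPage(driver).log_in("demo-user", "demo-pass")
```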

Teams can test faster and at scale by running multiple tests in parallel across distributed infrastructures and VMs. Users can define test data and environmental coverage to prepare, configure, and orchestrate test cases for multi-app, end-to-end process testing.

Panaya announces SAP S/4HANA migration toolkit
https://sdtimes.com/test/panaya-announces-sap-s-4hana-migration-toolkit/
Tue, 24 Jan 2023

Panaya announced its SAP S/4HANA migration toolkit, the Panaya 360 suite, to provide companies with tools for gaining full coverage, visibility, and control as they make this migration.

This suite is designed to meet the needs of those looking for a way to make SAP S/4HANA migrations simpler and less disruptive to business operations. It can speed up the process and minimize the risks by accurately pinpointing the project’s scope and the processes that must be altered.

The suite also includes SAP S/4HANA system conversion, version upgrades, and ongoing business changes.

Organizations can take advantage of Panaya to make informed decisions regarding upgrades and conversions through the use of sophisticated landscape intelligence. Panaya’s platform for SAP business process test management is designed to provide artificial intelligence and collaborative features between business and IT for comprehensive testing.

Users can gain a comprehensive understanding of all project activities, efforts, estimations, optimization factors, remediation, and testing activities within 48 hours, without the need for system integrators or specialist in-house knowledge.

Additional details are available here. 

2023: The Year of Continuous Improvement
https://sdtimes.com/devops/2023-the-year-of-continuous-improvement/
Fri, 13 Jan 2023

March 13, 2020. Friday the 13th. That’s when a large number of companies shut their offices to prevent the spread of a deadly virus – COVID-19. Many thought this would be a short, temporary thing. 

They were wrong.

The remainder of 2020 and 2021 were spent trying to figure out how to get an entire workforce to work remotely, while still being able to collaborate and innovate. Sales of cloud solutions soared. Much of the new software companies invested in required training just to get up to speed.

But training in the form of in-person conferences ceased to exist, and organizers sought to digitalize the live experience to closely resemble those conferences.

Fast forward to 2023. The software and infrastructure that organizations have put in place enabled them to continue to work, albeit not necessarily at peak performance. Most companies today have figured out the ‘what’ of remote work, and some have advanced to the ‘how.’

But this move to digital transformation has provided organizations with tools that can help them work even more efficiently than they could when tethered to an on-premises data center, and they are only now starting to reap the benefits. 

Thus, the editors of SD Times have determined that 2023 will be “The Year of Continuous Improvement.” It will, though, extend beyond 2023.

Bob Walker, technical director at continuous delivery company Octopus Deploy, said, “The way I kind of look at that is that you have a revolution, where everyone’s bought all these new tools and they’re starting to implement everything. Then you have this evolution of, we just adopted this brand new CI tool, or this brand new CD tool, whatever the case may be. And then you have this evolution where you have to learn through it, and everything takes time.”

Development managers, or a team of software engineers, or QA, have to worry about making sure they’re delivering on goals and OKRs, to ensure the software they deliver has value. So, Walker noted, “it’s a balance between ‘what can we do right now’ versus ‘what can we do in a few months’ time’? What do we have right now that is ‘good enough’ to get us through the next couple of weeks or the next couple months, and then start looking at how we can make small changes to these other improvements? It can be a massive time investment.”

Show me the metrics

Continuous improvement begins with an understanding of what’s happening in your product and processes. There are DevOps and workflow metrics that teams can leverage to find weaknesses or hurdles that slow production or are wasteful time sucks, such as waiting on a pull request. 
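
As a small, concrete example of such a metric, the sketch below computes the average time pull requests wait for a first review from a handful of timestamps; the records are hypothetical and would normally be pulled from a source-control system’s API.

```python
# Sketch: one workflow metric, the time pull requests wait for a first review.
# The records are hypothetical; in practice they come from a source-control API.
from datetime import datetime
from statistics import mean

pull_requests = [
    {"opened": "2023-01-09T10:00:00", "first_review": "2023-01-09T16:30:00"},
    {"opened": "2023-01-10T09:15:00", "first_review": "2023-01-11T11:00:00"},
    {"opened": "2023-01-11T14:00:00", "first_review": "2023-01-11T14:45:00"},
]


def hours_waiting(pr: dict) -> float:
    opened = datetime.fromisoformat(pr["opened"])
    reviewed = datetime.fromisoformat(pr["first_review"])
    return (reviewed - opened).total_seconds() / 3600


print(f"Average wait for first review: {mean(map(hours_waiting, pull_requests)):.1f} hours")
```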

Mik Kersten, who wrote the book “Project to Product” on optimizing flow, holds the view that continuous improvement needs to be driven by data. “You need to be able to measure, you need to understand how you’re driving business outcomes, or failing to drive business outcomes,” he said. “But it’s not just at the team level, or at the level of the Scrum team, or the Agile team, but the level of the organization.”

Yet, like Agile development and DevOps adoption, there’s no prescription for success. Some organizations do daily Scrum stand-ups but still deliver software in a “waterfall” fashion. Some will adopt automated testing and note that it’s an improvement. So this raises the question: Isn’t incremental improvement good? Does it have to be an overarching goal?

Chris Gardner, VP and research director at Forrester, said data bears out the need for organization-wide improvement efforts, so that as they adopt things like automated testing, or value stream management, they can begin to move down the road in a more unified way, as opposed to simply being better at testing, or better at security.

“When we ask folks if they’re leveraging DevOps or SRE, or platform methodologies, the numbers are usually pretty high in terms of people saying they’re doing it,” Gardner said. “But then we ask them, the second question is, are you doing it across your organization? Is every application being supported this way? And the answer is inevitably no, it’s not scaled out. So I believe that continuous improvement also means scaling out success, and not just having it in pockets.”

For Gardner, continuous improvement is not just implementing new methodologies, but scaling the ones you have within your organization that are successful, and perhaps scaling down the ones that are not. “Not every approach is going to be a winner,” he said. 

Eat more lean

Agile programming, DevOps and now value stream management are seen as the best-practice approaches to continuous improvement. These are based on lean manufacturing principles that advanced organizations use to eliminate process bottlenecks and repetitive tasks.

Value stream management, particularly, has become a new driver for continuous improvement.

According to Lance Knight, president and COO of VSM platform provider ConnectALL, value stream management is a human endeavor performed with a mindset of being more efficient. “When you think about the Lean principles that are around value stream management, it’s about looking at how to remove non-value-added activities, maybe automate some of your value-added activities and remove costs and overhead inside your value stream.”

Value stream management, he noted, is a driver of continuous improvement. “You’re continually looking at how you’re doing things, you’re continually looking at what can be removed to be more efficient,” he said.

Knight went on to make the point that you can’t simply deploy value stream management and be done. “It’s a human endeavor, people keep looking at it, managing it, facilitating it to remove waste,” he said. So, to have a successful implementation, he advised: “Learn lean, implement, map your value stream, understand systems thinking, consistently look for places to improve, either by changing human processes or by using software to automate, to drive that efficiency and create predictability in your software value stream.”

At software tools provider Atlassian, they’re working to move software teams to mastery by offering coaching. “Coach teams help [IT teams] get feedback about their previous processes and then allow for continuous improvement,” said Suzie Prince, head of product, DevOps, at Atlassian. In Compass, Atlassian’s developer portal that provides a real-time representation of the engineering output, they’ve created CheckOps, which Prince described as akin to a retrospective. “You’re going to look at your components that are in production, and look at the health of them every day. And this will give you insights into what that health looks like and allow you again to continuously improve on keeping them to the certain bar that you expect.”

Another driver of continuous improvement, she said, is the current economic uncertainty. With conditions being as they are, she said, “We know that people will be thinking about waste and efficiency. And so we also will be able to provide insights into things like this continuous flow of work and reducing the waste of where people are waiting for things and the handoffs that are a long time. We want to use automation to reduce that as well. All which I think fits in the same set of continuously improving.”

Key to it all is automation

Automation and continuous improvement are inextricably tied together, as heard in many conversations SD Times has had with practitioners over the course of the year. Automation is essential to freeing up high-level engineers from having to perform repetitive, mundane tasks as well as adding reliability to work processes.

So whether it’s automation for creating and executing test scripts, or for triggering events when a change to a code base is made, or implementing tighter restrictions on data access, automation can make organizations more efficient and their processes more reliable.

When starting to use automation, according to John Laffey, product strategy lead at configuration management company Puppet (now a Perforce company), you should first find the things that interrupt your day. “IT and DevOps staffs tend to be really, really interrupt-driven, when I go out and talk to them,” he said. “I hear anything from 30% to 50% of some people’s time is spent doing things they had no intention of doing when they logged on in the morning. That is the stuff you should automate.” 

Automating repetitive little things that are easy fixes is going to start freeing up time to be more productive and innovative, Laffey said. On the other hand, he said there’s no point in automating things that you’re only going to do once a month. “I once had a boss that spent days and days writing a script to automate something we did like once a quarter that took 15 minutes. There’s no return on investment on that. Automate the things that you can do and that others can use.”
