

An insider's guide to the AI and IoT testing process

Testing the internet of things is one thing, but AI takes it to the next level. A LogiGear executive shares what the company learned from its first serious foray into this world.

For software testers, change is inevitable and unlikely to stop anytime soon. And nowhere is that more true than with AI and the IoT testing process. For an inside look at what this work is really like, we asked Phuoc Nguyen, software testing engineer at LogiGear, about the company's recent experience testing an AI/IoT product from gaming company Anki. In the first part of this two-part series, we asked Anki Test Director Jane Fraser how it all worked from her perspective. Here, Nguyen offers a detailed insider's look at the AI and IoT testing process.

Is this your first serious foray into an AI/IoT testing process? What lessons can you share for other companies struggling to test these cutting-edge technologies?

Phuoc Nguyen: We have completed testing for other clients' embedded systems, but this is our first serious foray into the AI/IoT testing process. The first game we tested was a racing game in which the robotic cars are built with AI and controlled through the client's application on iOS and Android smartphones and tablets.

We learned a lot about AI. Specifically, when we first started, we wondered how these cars built their intelligence and managed to do things like identify a target and defeat an opponent so precisely. We learned that, from a player's perspective, defeating the AI was really difficult, especially an AI car with a high level of intelligence; the higher the level, the smarter the car becomes.

At first, we thought the intelligence was implemented inside the AI car itself. After some time testing, however, we saw that an AI car's intelligence actually comes from the way the engineers write the application code. Each car knows where it is on the racetrack, and where the other cars are, by reading codes printed on the track with an infrared camera on its underside. After scanning those codes, the car relays the position information back to the smartphone or tablet via Bluetooth. The application uses that information to drive the AI, which decides which weapons are most suitable to attack opponents based on their positions on the racetrack.
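
As a rough illustration of that division of labor, the app-side decision loop might look something like the sketch below. This is not Anki's actual code; the class, weapon names and decision rule are hypothetical placeholders showing that the "intelligence" lives in the application, while the car only reports its position.

    # Hypothetical sketch of the app-side AI loop: the car only reports position,
    # and the weapon choice (the "intelligence") lives in application code.

    from dataclasses import dataclass

    @dataclass
    class TrackPosition:          # decoded from the IR codes the car scans on the track
        segment: int              # which piece of track the car is on
        offset_mm: float          # how far along that segment

    def choose_weapon(ai_pos: TrackPosition, opponent_pos: TrackPosition) -> str:
        """Pick a weapon based on relative positions (simplified stand-in logic)."""
        if ai_pos.segment == opponent_pos.segment and ai_pos.offset_mm < opponent_pos.offset_mm:
            return "forward_cannon"   # opponent is ahead on the same segment
        return "mine"                 # otherwise drop something behind

    # In the real product the positions arrive over Bluetooth from each car;
    # here we hard-code two readings just to show the flow.
    ai_car = TrackPosition(segment=3, offset_mm=120.0)
    player_car = TrackPosition(segment=3, offset_mm=410.0)
    print(choose_weapon(ai_car, player_car))   # -> "forward_cannon"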

After we understood these factors, we developed strategies that included physical intervention to influence the way the AI acts while testing. For example, when playing against the AI at its highest level, players (especially people with less experience in the game) couldn't win if their cars drove in front of the AI car. So we chose an AI car equipped with a forward-firing weapon, and whenever the AI car fell behind the player's car, we picked it up and placed it in front of the player's car. That way, we could defeat the AI more easily. Scenarios like this helped us develop a comprehensive strategy.

In conclusion, an AI's intelligence depends on how humans program it, not on anything inherent in the AI itself. As a result, humans can create testing methods for it based on the rules the programmers made.

You used error guessing in the AI and IoT testing process. Can you expand on what that is, how you used it and how it helped?

Nguyen: Error guessing is a technique in which testers draw on their experience to guess the problematic areas of an application. We usually use this technique to identify where the team should focus its testing, so we can create an effective strategy and avoid wasting time on stable areas. Based on the experience we had gained over four years on the project, we understood what the AI did and how the system worked, so we could easily find the weaknesses of the application, as well as of the AI, through assumption and guessing. This saved us a lot of time because we focused on the questionable areas.
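
To make the idea concrete, here is a minimal sketch of how past defect data can steer error guessing toward the most questionable areas. This is our own illustration, not LogiGear's tooling; the area names and defect counts are invented.

    # Minimal sketch: rank application areas by historical defect counts so the
    # team spends its error-guessing time on the riskiest spots first.
    # The area names and counts are made up for illustration.

    defect_history = {
        "bluetooth_pairing": 42,
        "in_game_store": 7,
        "race_ai_behavior": 31,
        "tutorial_flow": 3,
    }

    def focus_areas(history: dict[str, int], top_n: int = 2) -> list[str]:
        """Return the areas with the most past defects -- the best guesses for new bugs."""
        return sorted(history, key=history.get, reverse=True)[:top_n]

    print(focus_areas(defect_history))   # -> ['bluetooth_pairing', 'race_ai_behavior']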

Stochastic testing is one technique you used. Can you expand on what this is and how you used it, and why it is particularly helpful for an under-14-year-old demographic and as part of the IoT testing process?

Nguyen: Stochastic testing (sometimes called monkey testing or random testing) is a technique in which a tester tests an application randomly to find problems. Most of the gaming audience in our project is children under 14 years old. Thus, we often play the game the way a child would to see whether the application can handle scenarios that rarely come up with adults. For example, we test cases where children could break the application in common real-world situations: tapping two buttons at the same time, tapping a button multiple times, tapping multiple buttons or links in quick succession, interrupting the app, or tapping everything on the screen to see what it does (children don't usually go through the tutorial). All of these actions can cause an application to get stuck or crash.
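
As a rough sketch of that kind of "play like a child" session, random tapping can be scripted as below. The tap_at and app_is_responding calls are hypothetical stand-ins for whatever UI automation client is actually in use; they are not real APIs.

    # Hypothetical monkey-testing sketch: fire random taps at the screen and
    # watch for a crash. tap_at() and app_is_responding() stand in for a real
    # UI driver and are placeholders, not real library calls.

    import random

    SCREEN_W, SCREEN_H = 1080, 1920

    def tap_at(x: int, y: int) -> None:
        print(f"tap ({x}, {y})")          # placeholder for a real driver call

    def app_is_responding() -> bool:
        return True                       # placeholder health check

    def monkey_session(taps: int = 500, seed: int = 42) -> None:
        random.seed(seed)                 # seed the run so a crash can be replayed
        for i in range(taps):
            tap_at(random.randrange(SCREEN_W), random.randrange(SCREEN_H))
            if not app_is_responding():
                raise RuntimeError(f"App stopped responding after {i + 1} taps")

    monkey_session(taps=20)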

We've talked about the IoT testing process, so what about exploratory testing and AI? Everyone is wondering how to get their hands around AI testing. Can you be specific about how you approached this and what you learned from it?

Nguyen: Testing AI is a challenging task, since we didn't have much documentation on how Anki's AI was programmed. We had to discover and explore to get familiar with the AI and understand its behavior. While testing, we took notes and recorded the actual test sessions, so when bugs occurred, we checked our recordings to find the cause of failure. We observed the whole context: environment, platform, device, the emotion on the robot's face, the robot's battery, the device's battery and the game we played, and we used the game-testing experience we had gained during the project to narrow the "bug zone."

For example, we tested a robot that is introduced as an intelligent character with a big mind. He has the ability to remember, be curious, explore and get to know people; he is almost like a human. Thus, we first focused testing on the robot, since we thought the intelligence resided in him. However, after getting familiar with the AI and the application through this kind of exploratory testing, we found the intelligence actually lives in the device application. Basically, the robot is a collection of lights, motors, sensors and firmware running on processors. The firmware's job is to communicate with the application over the robot's Wi-Fi (the robot acts as a Wi-Fi access point) to store data persistently, run the motors and so on. So whenever the firmware changes, we focus testing on communication between the app and the robot instead of on the robot's behaviors alone.
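
A minimal sketch of the kind of app-to-robot communication check that gets priority after a firmware change might look like the following. The access-point address, port and handshake payload are invented placeholders, not Anki's protocol.

    # Hypothetical connectivity smoke check: once the phone has joined the
    # robot's Wi-Fi access point, verify that a socket to the robot opens and
    # that a handshake gets any reply. Address, port and payload are invented.

    import socket

    ROBOT_ADDR = ("192.168.42.1", 5000)    # placeholder access-point address
    HANDSHAKE = b"PING"

    def robot_reachable(timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection(ROBOT_ADDR, timeout=timeout) as sock:
                sock.sendall(HANDSHAKE)
                return len(sock.recv(64)) > 0   # any reply counts as "alive"
        except OSError:
            return False

    print("robot reachable:", robot_reachable())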

How long did the IoT testing process take, and do you have an idea of how many tests total were run?

Nguyen: Actually, this is a difficult question to answer. There are no written test cases for the test types above, since those techniques rely on each tester's experience with the system. One certainty, however, is that we write test cases for functional testing and smoke testing, in which we apply test case design techniques such as equivalence partitioning, boundary analysis, constraint analysis, state transition and condition combination. To date, the total number of test cases for the three games we tested is around 8,000. We combined all of these test types, such as exploratory/ad hoc testing, error guessing, stochastic testing, functional testing and smoke testing, during the testing phase to make sure we had maximum test coverage.
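
For readers unfamiliar with those design techniques, here is a small illustrative example of boundary analysis expressed as a parameterized test. It is not one of the project's actual test cases; it assumes a hypothetical rule that a race allows 2 to 8 cars.

    # Illustrative boundary-analysis test cases for a hypothetical rule:
    # a race must have between 2 and 8 cars. Values sit on and just outside
    # the boundaries of the valid partition.

    import pytest

    MIN_CARS, MAX_CARS = 2, 8

    def is_valid_car_count(n: int) -> bool:
        return MIN_CARS <= n <= MAX_CARS

    @pytest.mark.parametrize("count, expected", [
        (1, False),   # just below the lower boundary
        (2, True),    # lower boundary
        (3, True),    # just above the lower boundary
        (7, True),    # just below the upper boundary
        (8, True),    # upper boundary
        (9, False),   # just above the upper boundary
    ])
    def test_car_count_boundaries(count, expected):
        assert is_valid_car_count(count) == expected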

We often hear about automating software testing. This seems counter to that argument because, for so much of this IoT testing process, you needed human testers and lots of them working together. Can you expand on those thoughts?

Nguyen: Test automation is using computer time to execute tests. It has many benefits, such as running automated tests unattended (overnight), reliable repetitive testing, faster test execution, improved quality and test coverage, reduced cost and time for regression testing, execution of tests that can't be done manually, and performance testing to assess software stability, concurrency, scalability and so on.

We cannot apply test automation to the AI itself, since automation is only useful for stable systems with written test cases, whereas AI behaviors are very complicated and random, so AI testing is better suited to manual execution. However, in our current project for Anki, we did apply test automation elsewhere, which helped the team free up time.

For example, one week after a release, we clean up crash reports in Jira that no longer occur on the latest release. Crashes are errors generated and sent to the server when a player's game crashes during play, and they are logged in Jira automatically. We created a bot to query Jira for crashes that no longer happen on the latest release, then add a comment and close them. Thousands of bugs are closed that way after every weekly release, which frees up manual testers' time. Another example: we run unit tests to check basic functions whenever we have a new build, to make sure the build is testable. This also saves time, because we don't have to wait until manual testing to see if a function is broken; that check is already covered by unit testing. One final example: we run daily regression tests for the website where the client's products are sold. The website is quite stable now, but we need to run regression testing every day (overnight) to make sure it still works, since developers sometimes make minor changes to it. This saves a lot of time, as a manual tester cannot execute thousands of test cases every day.
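
A simplified sketch of that kind of crash-cleanup bot, built on the Python jira client, could look like the following. The JQL filter, project key, label and transition name are assumptions for illustration, not the team's actual configuration.

    # Simplified sketch of a crash-cleanup bot using the python "jira" client.
    # The JQL query, project key and transition name are assumptions; adjust
    # them to the real Jira setup and workflow.

    from jira import JIRA

    LATEST_RELEASE = "1.7.0"   # hypothetical version string

    jira = JIRA(server="https://example.atlassian.net",
                basic_auth=("bot@example.com", "api-token"))

    # Find open crash reports that were not seen on the latest release.
    stale_crashes = jira.search_issues(
        f'project = GAME AND labels = crash AND status != Closed '
        f'AND affectedVersion != "{LATEST_RELEASE}"',
        maxResults=1000,
    )

    for issue in stale_crashes:
        jira.add_comment(issue, f"No occurrences on {LATEST_RELEASE}; closing automatically.")
        jira.transition_issue(issue, "Close")   # transition name depends on the workflow

    print(f"Closed {len(stale_crashes)} stale crash reports")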
