The inevitable costs of bad software

When designed poorly, any software can pose risks to the public. Things like the loss of personally identifiable information or enlistment into a botnet for a distributed denial of service attack are just the tip of the proverbial iceberg. And there is an effective, if unsexy, solution on the horizon: shifting product liability for bad code.

In today’s software landscape, there is essentially no liability for software failures. The standard ‘click-wrap’ contract that comes with any software package disclaims liability and shifts responsibility to the end-user – and for the most part courts have upheld those agreements. As long ago as 1986, a federal court held that Apple could not be sued for bugs in its software, having disclaimed liability and made no claim that the code was bug-free.

But where life and death are at issue, responsibility and liability cannot be far behind. We don’t know what the disaster that triggers a call for reform will look like: it could be in the medical field with a failure of life-saving technology, or in the automotive field when a self-driving car is hit with ransomware. Whatever it is, there will be inevitable demands for liability systems, and if the industry is not proactive in its approach it may well face increasing regulatory intervention.

In fact, we are already seeing movement in this realm: since 2013, the Federal Trade Commission has successfully settled with several companies it has accused of failing to take reasonable steps to secure their products. Most recently, the FTC filed a complaint against D-Link Corporation for allegedly preventable vulnerabilities in its routers and internet...
Regression testing in a nutshell…

Regression testing is a type of testing carried out to make sure that code modifications to a software product did not introduce additional defects (side effects) into existing functionality that was working fine before. In other words, its goal is to identify the defects in related functionality that creep in after code changes to already-tested functionality.

Many organizations verify critical functionality once, and then assume it continues to work unless they intentionally modify it. However, even routine and minor code changes can have unexpected side effects that might break previously verified functionality. The purpose of regression testing is to detect these unexpected faults – especially those that occur because a developer inheriting the code did not fully understand its internal dependencies when modifying or extending it. Every time code is modified or used in a new environment, regression testing should be used to check the code’s integrity.

Regression testing should be tightly linked to functional testing, and be built from the successful test cases developed and used in functional testing. These test cases, which verified an application’s behavior or functionality, are then rerun regularly as regression tests and become the means of verifying that the application continues to work correctly as new code is added. During regression testing, specified test cases are run and current outcomes are compared to previously recorded outcomes. This forms the basis of the reports that are used to illustrate software defects or deviation from...
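The compare-against-recorded-outcomes loop described above can be sketched in a few lines of Python. This is only an illustration: the function names, the discount example, and the baseline values are ours, not from any particular test framework.

```python
# Sketch of a regression-testing loop: rerun recorded test cases and
# compare each current outcome to the previously recorded (baseline)
# outcome. A mismatch flags a potential regression from a code change.

def run_regression_suite(test_cases, baseline):
    """Run every test case and report any outcome that no longer
    matches its previously recorded baseline outcome."""
    failures = []
    for name, run_case in test_cases.items():
        actual = run_case()
        expected = baseline.get(name)
        if actual != expected:
            failures.append({"case": name, "expected": expected, "actual": actual})
    return failures

# Baseline outcomes recorded when this functionality last passed
# functional testing (values are illustrative).
baseline = {"discount_100": 90.0, "discount_19_99": 17.99}

def price_with_discount(total):
    return round(total * 0.90, 2)  # intended behavior: 10% off

test_cases = {
    "discount_100": lambda: price_with_discount(100.0),
    "discount_19_99": lambda: price_with_discount(19.99),
}

regressions = run_regression_suite(test_cases, baseline)  # empty: no regressions
```

If a later "optimization" changed `price_with_discount` so that one of these cases returned a different value, the suite would report that case with its expected and actual outcomes – exactly the defect-versus-baseline report described above.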
Apple’s iOS 11, new iPhones, coming this fall!

Apple’s iOS 11 is coming, and with it a plethora of new and exciting features – and some new and exciting iPhones and iPads too! Here at iBeta, we work with Apple as part of their developer program to ensure our testers (and in some cases our clients) get a chance to work with, and test against, the latest iOS builds weeks or even months before they are released. While pre-release and release-candidate builds are typically only applicable to very specific test cases, the opportunity for our testers to become familiar with the design language of the new releases, as well as knowing where all the buttons have moved to, allows for a much higher number of test hours and a more thorough test environment upon release. So, if your mobile app is slated for operation on iOS 11, coming in the fall of 2017, and you need to ensure it plays well with the new design elements or any of the improved functionality – give us a call, we’re happy to...
Automation Engagement Flow

Some time ago I wandered the building here at iBeta to speak with some of our Test Engineers – the folks who work in our Automation, Performance, and Security verticals – for the purpose of getting them to write a bit about how they do what they do for the blog. Now, this was not easy; your typical capital ‘E’ Engineer doesn’t even speak the same language as everyone else, and they generally don’t care for the trivialities of ‘blogs‘ or ‘social media‘ either. So plying them with the promise of making them Internet Famous didn’t really work… Fortunately there is always pizza – the universal food group – and I was eventually successful in getting them to write a few things for us!

So, without further ado: “The Flow of an Automation Engagement“, by Joshua Kitchen

An automation engagement has five primary phases:

1. Infrastructure
2. Scripting
3. Batching
4. Integration
5. Maintenance

There’s overlap from phase to phase, and any particular phase doesn’t necessarily need to be “complete” to move to the next one, but this is the general flow.

Infrastructure

Infrastructure is all about laying the groundwork to enable a successful automation effort. Minor gains can be made on the part of single testers given automation tools, but to make fundamental changes to development culture it is necessary to lay out infrastructure. Infrastructure can be described in four categories:

- Tool assets
- Personnel assets
- Physical assets
- Software assets

Tool assets are generally the first place organizations start in automation: buy or find a tool that meets the general anticipated needs, then hand it out to the test staff. The other three...
Samsung Galaxy S8 and S8+ enter mobile testing matrix

The highly anticipated Samsung Galaxy S8 and S8+ have arrived here at iBeta, and have entered service in our mobile testing lab.

The Galaxy S8 offers a 5.8 inch AMOLED screen at 570 ppi, while the Galaxy S8+ has a larger 6.2 inch AMOLED screen at 529 ppi, both at an 18.5:9 aspect 1440×2960 pixel resolution. Both phones run Android 7.0 Nougat on an octa-core Snapdragon 835 with 4 GB of RAM and 64 GB of storage, have 12 MP rear-facing and 8 MP front-facing cameras, a fingerprint sensor, an iris scanner, and a heart rate sensor, and feature both 802.11ac Wi-Fi (with gigabit speeds supported) and Bluetooth 5.0 with Near Field Communication (NFC).

We expect that the general need for these phones in test will be for the taller 18.5:9 aspect ratio: when displaying 16:9 content, the S8 and S8+ will display black bars to fill the unused space – and while the end-user can change this in the settings, it will affect the image quality. Given the prospective popularity of this phone we are expecting to see a lot of taller-than-16:9 content coming soon, and this will make our test phones very busy over the coming months.

The S8 and S8+ have an optional accessory called Samsung DeX, which is a mobile-to-PC transition tool that can turn the Galaxy S8 into a PC. Similar to Microsoft’s Continuum feature, the Samsung DeX tool will be compatible with the Galaxy S8 and the Galaxy S8+, and will offer users an Android-based desktop-like experience. Samsung Galaxy S8 users will have to plug the handset into the DeX Station, which will connect the handset to...
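The black-bar math is easy to check with a small Python sketch. The function name is ours, and the numbers assume the panel’s full 1440×2960 resolution with content fit to the screen (no cropping or stretching).

```python
def letterbox_bar_px(screen_w, screen_h, content_w, content_h):
    """Size in pixels of each black bar when content with aspect ratio
    content_w:content_h is fit onto a screen_w x screen_h panel
    (both given in landscape orientation)."""
    # Compare aspect ratios by cross-multiplying to avoid float error.
    if content_w * screen_h < screen_w * content_h:
        # Content is narrower than the screen: bars on the left and right.
        scaled_w = screen_h * content_w / content_h
        return (screen_w - scaled_w) / 2
    # Content is wider than the screen: bars on the top and bottom.
    scaled_h = screen_w * content_h / content_w
    return (screen_h - scaled_h) / 2

# 16:9 video on the Galaxy S8's 2960x1440 panel in landscape:
bar = letterbox_bar_px(2960, 1440, 16, 9)  # 200.0 px on each side
```

So a 16:9 video occupies a 2560×1440 region of the S8’s panel, leaving a 200-pixel black bar on each side – the unused space the end-user setting trades away against image quality.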