
Bugsy 0.4.0 - More search!

Sat 06 Dec 2014

I have just released the latest version of Bugsy. It allows you to search bugs via change history fields and within a certain timeframe, so you can find things like bugs created within the last week, as shown below.

I have updated the documentation to get you started.


>>> import bugsy
>>> bugzilla = bugsy.Bugsy()
>>> bugs = bugzilla.search_for\
...                .keywords('intermittent-failure')\
...                .change_history_fields(["[Bug creation]"])\
...                .timeframe("2014-12-01", "2014-12-05")\
...                .search()

You can see the Changelog for more details.

Please raise issues on GitHub

    Area: blog

WebDriver Face To Face - TPAC 2014

Tue 04 Nov 2014

Last week was the 2014 W3C TPAC. For those that don't know, TPAC is a conference where a number of W3C working groups get together in the same venue. This allows for a great amount of discussion between groups and also allows people to see what is coming in the future of the web.

The WebDriver Working Group was part of TPAC this year, like previous years, and there were some really great discussions.

The main topics that were discussed were:

  • We are going to be moving more discussions to the mailing list. This is to prevent people from waiting for a face-to-face to discuss things
  • The data model of how data is sent over the wire between the remote and local ends
  • Attributes versus properties - this old chestnut
  • An approach to moving some of the manual tests that are used for W3C specs to automated ones with WebDriver - this is exciting

The meeting minutes for Thursday and Friday are available.

    Area: blog

WebDriver F2F - London 2014

Mon 14 Jul 2014

Last week saw the latest face-to-face of the WebDriver Working Group, held at Facebook. This meeting was important as it is hopefully the last face-to-face before we go to Last Call, allowing us to concentrate on issues that come up during that period.

This meeting was really useful as we had a number of discussions around the prose of the spec when it comes to conformance and usability, especially for implementors who have never worked on WebDriver.

The Agenda from the meeting can be found here

The notable items that were discussed are:

  • Merge getLocation and getSize into a single getElementRect call. This has been implemented in FirefoxDriver already - see the sketch after this list
  • Describe restrictions around localhost in security section
  • How the conformance tests will look (Microsoft have a huge raft of tests they are cleaning up and getting ready to upstream!)
  • Actions have been tweaked from the original straw man delivered by Mozilla; hopefully we will see the new version in the next few weeks.
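
Roughly, from a client's point of view, the getElementRect merge looks like the Python sketch below. This is just an illustration: it assumes `element` is an already-located element, and the exact property names and return values vary between client bindings.


>>> # before: position and size come from two separate calls
>>> element.location
{'x': 10, 'y': 20}
>>> element.size
{'width': 100, 'height': 50}
>>> # after: a single getElementRect call returns everything at once
>>> element.rect
{'x': 10, 'y': 20, 'width': 100, 'height': 50}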

To read what was discussed you can see the notes for Monday and Tuesday.

    Area: blog

Bugsy 0.3.0 - Comments!

Mon 14 Jul 2014

I have just released the latest version of Bugsy. This allows you to get comments from Bugzilla and add new comments too. This API is still experimental so please send back some feedback, since I may change it based on real-world usage.

I have updated the documentation to get you started.


>>> comments = bug.get_comments()
>>> comments[0].text
"I <3 Cheese"
>>> bug.add_comment("And I love bacon")

You can see the Changelog for more details.

Please raise issues on GitHub

    Area: blog

Bugsy 0.2 - Now with 100% more search!

Thu 26 Jun 2014

I have updated Bugsy to now have the ability to search Bugzilla in a meaningful way. I have updated the documentation to get you started.

For example, to search Bugzilla you would do


>>> import bugsy
>>> bugzilla = bugsy.Bugsy()
>>> bugs = bugzilla.search_for\
...                .keywords("checkin-needed")\
...                .include_fields("flags")\
...                .search()

You can see the Changelog for more details.

Please raise issues on GitHub

    Area: blog

Introducing Bugsy - Client Library for interacting with Bugzilla

Mon 16 Jun 2014

I have created a library for interacting with Bugzilla using the native REST API. Bugsy allows you to get bugs from Bugzilla, change what you need to and then post it back to Bugzilla. I have created documentation to get you started.

For example, to get a bug you would do

import bugsy
bugzilla = bugsy.Bugsy()
bug = bugzilla.get(123456)

and then to put it back, or to create a new bug (where there is no bug ID yet), you would do

import bugsy
bug = bugsy.Bug()
bug.summary = "I really really love cheese"
bug.add_comment("and I really want sausages with it!")
bugzilla = bugsy.Bugsy("username", "password")
bugzilla.put(bug)
bug.id  # returns the bug id from Bugzilla

Searching Bugzilla is not currently supported but will definitely be there for the next version.

Please raise issues on GitHub

    Area: blog

My Ideal build, test and land world

Thu 05 Jun 2014

The other week I tweeted that I had noticed that, on that day, 1 in every 10 pushes to Mozilla Inbound was a revert. For those that don't know, Mozilla Inbound is the most active integration repository that Firefox code lands in. A push can contain a number of commits depending on the bug or if a sheriff is handling checkin-needed bugs.

This tweet got replies like, and I am paraphrasing, "That's not too bad" and "I expected it to be worse". Personally I think this is an awful rate. Why? On that day, only 80% of pushes were code changes to the tree. A bad push and its revert lead to no changes to the tree but still use our build and test infrastructure. This means that, at best, we could (on that day) only be 80% efficient. So how can we fix this?

Note: A lot of this work is already in hand but I want to document where I wish it to go. A lot of the issues are really paper cuts, but it can be death by 1000 paper cuts.

Building

Mach, the current CLI dispatch tool at Mozilla, passes the build details to the build scripts. It is a great tool if you haven't used it yet; the work the build peers have done with it is pretty amazing. However, I do wish the build targets passed to Mach and then executed were aligned with the way that Chromium, Facebook, Twitter and Google build targets work.

For example, working on Marionette, if I want to run Marionette tests I would do |./mach //testing/marionette:test| instead of the current |./mach marionette-test|. By passing in the directory we are explicitly declaring what we want to be built and tested. The moz.build file should have a dependency saying that we need a Firefox binary (or apk or b2g). The test task in the moz.build in the testing/marionette folder would pick runtests.py and then pass in the necessary arguments, ideally based on items in the MozConfig. Knowing the relevant arguments based on the build is hard work involving looking at your shell history or at a wiki.
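
To make that concrete, here is a rough and entirely hypothetical sketch of what such a task declaration in testing/marionette/moz.build could look like. moz.build files are Python, but none of the names below exist today; they are only there to illustrate the shape of the idea.


# testing/marionette/moz.build -- hypothetical syntax, for illustration only

# The test task depends on a Firefox binary (or apk or b2g) being built,
# so `./mach //testing/marionette:test` can trigger an incremental build first.
TASKS['test'] = {
    'requires': ['//browser:firefox'],
    'runner': 'runtests.py',
    # sane default arguments derived from the MozConfig rather than
    # remembered from shell history or a wiki page
    'default_args': ['--binary={OBJDIR}/dist/bin/firefox'],
}

# Same directory, different task: `./mach //testing/marionette:xpcshell`
TASKS['xpcshell'] = {
    'requires': ['//browser:firefox'],
    'runner': 'runxpcshelltests.py',
}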

Working on something that has unit tests and mochitests or xpcshell tests? It's simple to just change the task, e.g. ./mach //testing/marionette:test quickly becomes ./mach //testing/marionette:xpcshell. Again, there is no worrying about arguments when we can create sane defaults based on what we just built. I have used testing in my examples because it is simple to show different build targets for the same path.

The other reason for declaring the path (and mentioning the dependencies in the same manner) is that if you call |./mach //testing/marionette:test| after updating your repo it will do an incremental build (or a clobber if needed) without you needing to know it was required. Manually clobbering things or running builds just to run tests is just busy work again.

Reviews and Precommit builds/tests

Want a review? You currently either have to use bzexport or manually create a diff, upload it to the bug and set the reviewer. The Bugzilla team are working to stand up Review Board, which would allow us to upload patches and would give us a better review tool.

The missing pieces for me are: 1) we have to manually pick a reviewer and 2) we don't have a pre-commit build and test step.

1) I have been using Opera's Critic for reviewing Web Platform Tests. Having the ability to assign people to review changes for a directory means that reviewing is everyone's responsibility. Currently Bugzilla allows you to pick a reviewer based on the component that the bug is in. Sometimes a patch spans other areas and you then need to figure out who the reviewer should be. I think that we can do better here.
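
As a toy illustration of what per-directory reviewer assignment could look like (the mapping, names and helper below are all made up):


import os

# Hypothetical mapping from directory to people who can review changes there;
# the paths and reviewer names are made up for illustration.
REVIEWERS = {
    'testing/marionette': ['reviewer-a', 'reviewer-b'],
    'testing/web-platform': ['reviewer-c'],
}

def candidate_reviewers(changed_files):
    """Return everyone responsible for any directory the patch touches."""
    candidates = set()
    for path in changed_files:
        directory = os.path.dirname(path)
        # walk up the tree so a file deep inside a component still matches
        while directory:
            if directory in REVIEWERS:
                candidates.update(REVIEWERS[directory])
                break
            directory = os.path.dirname(directory)
    return sorted(candidates)

print(candidate_reviewers(['testing/marionette/client/marionette.py']))
# ['reviewer-a', 'reviewer-b']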

As for 2), we don't necessarily need to do everything; the equivalent of a T-style run would suffice in my opinion. We could even work to pare this down further to literally a handful of tests that regularly catch bugs, or make it run tests based on where the patch is landing.

Why does this matter? We have try that people can use, and "my code works" and "it was reviewed, it will compile" are common refrains. Mozilla Inbound was closed for a total of 2 days (48+ hours) in April and 1 day (24 hours) in May. At the moment I only have the data on why the tree was closed, not the individual bugs that caused the failures, but a pre-commit step would definitely limit the damage. The pre-commit step might also catch some of the test failures (if we had test suites we could agree on as the smoke test suite), which had Mozilla Inbound closed for over 2½ days (61+ hours) in April and over 3 days (72+ hours) in May.
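
A minimal sketch of what that could look like, assuming we could agree on a tiny smoke suite (the list of checks below is made up; only the mach commands already mentioned above are used):


import subprocess
import sys

# Hypothetical pre-push hook: run a tiny smoke suite before the push goes
# anywhere near the tree. The commands below are only examples; agreeing on
# what belongs in the smoke suite is the hard part.
SMOKE_CHECKS = [
    ['./mach', 'build'],
    ['./mach', 'marionette-test'],
]

def main():
    for command in SMOKE_CHECKS:
        if subprocess.call(command) != 0:
            print('Smoke check failed: %s' % ' '.join(command))
            return 1  # a non-zero exit blocks the push
    return 0

if __name__ == '__main__':
    sys.exit(main())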

Landing code

Once the review has passed we still have to manually push the code or set a keyword on the bug (checkin-needed) so that the sheriffs can land it. This manual step is just busy work that isn't needed. If something has an r+ then ideally we should be queueing it up to be landed. This is minor compared to the manual step required to update the bug with the SHA once it has landed; when it lands we should be updating the bug accordingly. It's not really that hard to do.
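
To show how small that step could be, here is a rough sketch using Bugsy (see the posts above). The helper, where bug_id and sha come from, and the comment format are all assumptions; only the Bugsy calls shown earlier are used.


import bugsy

def mark_bug_landed(bug_id, sha, username, password):
    # hypothetical helper: comment the landed changeset on the bug
    # so nobody has to paste the SHA in by hand
    bugzilla = bugsy.Bugsy(username, password)
    bug = bugzilla.get(bug_id)
    bug.add_comment("Landed: https://hg.mozilla.org/integration/mozilla-inbound/rev/%s" % sha)

mark_bug_landed(123456, "deadbeef1234", "username", "password")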

Unneeded manual steps have an impact on engineering productivity, which has a financial cost that could be avoided.

I think the main reason why a lot of these issues have never been surfaced is that there is not enough data to show them. I have created a dashboard; it only has the items I care about currently but could easily be expanded if people wanted to see other bits of information. The first step towards solving the issues above is being able to show them.

    Area: blog

Wanting to do some open source work but not sure what this weekend?

Thu 01 May 2014

The Automation and Tools team at Mozilla has been working tirelessly to find some bugs that everyone can work on if you are stuck at home in the rain. Our collection of good first bugs has been curated and has a number of really great mentors that can help get you started on the path to submitting a patch.

Wondering if it is worthwhile? My post last week asking for people to help on Marionette Good First Bugs has had 2 patches landed in Mozilla-Central and people looking at another 3 bugs. The list only had 9 bugs, so that is more than half of them being picked up.

The best way to do things is look at our New Contributor page and get all the necessary things set up. Unfortunately some items might take a little work, but it's just something you need to do once.

Want to work on some bugs? We have a great list that you can choose from.

I look forward to seeing some great patches from you all!

    Area: blog

Do you trust a test that you have never seen fail?

Mon 28 Apr 2014

Recently David Heinemeier Hansson (dhh) wrote a blog post called "TDD is dead, long live testing". He describes how the TDD world has got mean-spirited, and that perhaps the point of the technique was to break down the barriers to automated testing and regression testing, but that is no longer the case. (I agree with this a little, but there are a lot of angry people out there.)

He then declares that he has had enough, that he does not write tests first, and that he is proud of it.

This post has obviously had mixed reviews. A lot of the consultant types that I follow on Twitter who are rather large advocates of TDD have said dhh shouldn't really be saying things like this. Their arguments have been around how hard it is to test Rails applications because the underlying architecture doesn't allow it (which distracts from the argument, in my opinion!). The thing that David describes briefly is that we should not follow things dogmatically, which is sound advice that everyone should heed. It's the same with best practices: don't follow them unless they make sense.

The number of people who then started shouting from the rooftops that they don't test first and are proud of it grew quite quickly. I find this sentiment quite worrying. Why?

How do you trust a test that you have never seen fail?

If you have hundreds of thousands of tests that are run when you commit code, you want there to be a very high probability that any regression will be caught. Writing tests after you have written the code can lead to a lot of tests that may never fail, which take up huge amounts of resource that cost money. At the beginning of the year Mozilla would use ~200 computing hours per push to Mozilla Inbound (the main tree that holds the Firefox code). That's A LOT of resource to be wasting on a test that you are not sure will ever catch anything. For what it is worth, there are a lot of tests in the Mozilla tree that may never catch anything, but it's hard to find them :(.
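
As a contrived Python illustration of the problem: the test below was written after the code, has never been seen red, and stays green even though the code under test is wrong.


import unittest

def apply_discount(price, percentage):
    # buggy: should be price * (1 - percentage / 100.0)
    return price * (1 - percentage)

class TestDiscount(unittest.TestCase):
    def test_discount_runs(self):
        # written after the code and never seen failing: it asserts almost
        # nothing, so it passes even though apply_discount is wrong
        result = apply_discount(100, 10)
        self.assertIsNotNone(result)

if __name__ == '__main__':
    unittest.main()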

I know that you can't always write a test first (there are times where it is quite hard to do that), but making sure you have faith in your tests, so that they actually do what you expect, is the most important thing you can do.

dhh does say that not everything can be unit tested, and I think this is the crux of his issue. He, and a lot of other people, are getting hung up on labels for tests, in my opinion. This then puts them off writing tests first. I have been a big fan of the way that Google labels their tests as small, medium or large. This removes the dogmatic beliefs in TDD and describes tests by how much work they are doing. This is the way we should be doing it!!!

So if you are not going to write a test first, make sure that you are writing tests that you can trust and that your colleagues can trust too!

    Area: blog

Marionette Good First Bugs - We need you!

Fri 25 Apr 2014

I have been triaging Marionette (the Mozilla implementation of WebDriver built into Gecko) bugs for the last few weeks. My goal is to drive the project, barring any unforeseen fire drills that my team may need to attend to, to getting Marionette released.

While triaging I have also been finding bugs that would be good for first-time contributors to get stuck into the code and help me get this done.

So, if you want to do a bit of coding and help get this done, please look at our list.

I will be adding to it constantly so feel free to check back regularly!

    Area: blog