Apologies for the spam if you are already aware of this, but I have proposed
two workshops for this year's hackathons: "Write the first
browsertests/Selenium test" [0] and "Fix broken browsertests/Selenium Jenkins
jobs" [1].
Željko
--
0:
On Tue, Apr 14, 2015 at 8:55 PM, Tomasz Finc <tfinc(a)wikimedia.org> wrote:
CC'ing Stephane from the Collaboration team, who's keenly interested in
this as well.
--tomasz
On Fri, Apr 3, 2015 at 2:30 AM, Joaquin Oltra Hernandez
<jhernandez(a)wikimedia.org> wrote:
Personally, I love them, but more often than not when I run them a few
things happen that make me sad:
Tests take too long to run
Even on Gather, which doesn't have too many tests, it takes a really long
time, which discourages running all tests every time.

Tests don't block Jenkins merges
This would be key for me. Do this. Block merges in Jenkins. Then we'll be
forced to make the tests faster and write better browser tests, and
everybody will have to care, because the tests will run and block merges.
So there are a few key items that I would love to see:

- Improve performance (needs order-of-magnitude improvements, and a lot of
work on different fronts)
- Fully support headless browsers like PhantomJS (they randomly break with
timeout errors and other problems, but they are the fastest and least
painful way of running the tests)
- Run the browser tests (or a smoke subset) on Jenkins patches as a voting
job. This is crucial for making everybody care about the tests and for
stopping regressions.
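To make the "smoke subset" idea concrete, here is a minimal, purely
illustrative sketch of tagging a fast subset of tests and collecting only
those for a gate job. It uses plain Python/unittest with a made-up test class
and a `@smoke` marker; our actual browser tests select scenarios differently
(e.g. via Cucumber tags), so every name below is an assumption, not code from
our repos:

```python
import unittest

def smoke(fn):
    """Mark a test as part of the fast 'smoke' subset."""
    fn._smoke = True
    return fn

class SpecialMobileWatchlistTests(unittest.TestCase):  # hypothetical suite
    @smoke
    def test_page_loads(self):
        # Fast sanity check; in a real suite this would drive a headless
        # browser to the page and assert on the heading.
        self.assertTrue(True)

    def test_full_edit_flow(self):
        # Slow end-to-end scenario; excluded from the voting smoke run and
        # only exercised in the full post-merge job.
        self.assertTrue(True)

def smoke_suite(case):
    """Collect only the @smoke-marked tests from a TestCase class."""
    names = unittest.defaultTestLoader.getTestCaseNames(case)
    picked = [n for n in names if getattr(getattr(case, n), "_smoke", False)]
    return unittest.TestSuite(case(n) for n in picked)

if __name__ == "__main__":
    unittest.TextTestRunner().run(smoke_suite(SpecialMobileWatchlistTests))
```

The point is the split: the voting Jenkins job would run only
`smoke_suite(...)` on every patch, while the full (slow) suite keeps running
post-merge.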
On Thu, Apr 2, 2015 at 10:18 PM, Jon Robson <jdlrobson(a)gmail.com> wrote:
>
> Personally, I think one team needs to get this completely right and
> demonstrate the difference, e.g. fewer bugs, faster iteration, quicker
> code review times, etc.
>
> Talks can come off the back of that.
> The majority of people I speak to seem to advocate a TDD approach, but
> I think we don't make life easy enough for them to do that, and we lack
> the discipline to do it. We need to work on both of those.
>
> I'm confident that if we do a survey we'll identify and prioritise the
> pain points. For me, the top priority would be getting the infrastructure
> in place on all our existing codebases, in a consistent way that makes
> adding tests effortless and prevents code from merging when it breaks
> those tests, but there may be more important things we need to sort out
> first!
>
>
> On Tue, Mar 31, 2015 at 1:16 AM, Sam Smith <samsmith(a)wikimedia.org> wrote:
> > Dan, Jon,
> >
> > I got caught up in meetings yesterday – you'll see this email a lot
> > during Q4 ;) – so I delayed sending it; forgive the repetition of some
> > of Dan's points/questions:
> >
> >> Here are a few ways I can think of:
> >>
> >> include feedback on browser tests – or lack thereof – during code
> >> review
> >>
> >> make browser test failures even more visible than they currently are –
> >> but maybe not the success reports, eh?
> >>
> >> can these reports be made to point at a bunch of candidate changes that
> >> may have broken 'em?
> >>
> >> hold a browser-test-athon with the team and any volunteers at the
> >> {Lyon,Wikimania} hackathon
> >>
> >> make it trivial to run 'em, if it isn't already
> >
> > From what little experience I have of trying to establish team
> > practices, I'd say that it's best to advocate for <practice> and
> > demonstrate its value*, rather than criticise. I'd love to see you
> > funnel your passion for browser testing into a talk or series of talks
> > for the mobile team – the org, maybe? – or maybe you've got some
> > recommended reading or talks you'd like to share that'll inspire.
> >
> > –Sam
> >
> > * If you'd like to hear my opinions about browser testing, then insert
> > one beer and wind me up a little
> >
> >
> > On Mon, Mar 30, 2015 at 8:47 PM, Dan Duvall <dduvall(a)wikimedia.org>
> > wrote:
> >>
> >> https://phabricator.wikimedia.org/T94472
> >>
> >> On Mon, Mar 30, 2015 at 12:39 PM, Dan Duvall <dduvall(a)wikimedia.org>
> >> wrote:
> >> > On Mon, Mar 30, 2015 at 10:30 AM, Jon Robson <jdlrobson(a)gmail.com>
> >> > wrote:
> >> >> It really saddens me how very few engineers seem to care about
> >> >> browser tests. Our browser tests are failing all over the place. I
> >> >> just saw this bug [1], which has been sitting around for ages and
> >> >> denying us green tests in Echo, one of our most important features.
> >> >>
> >> >> How can we change this anti-pattern?
> >> >
> >> > That's exactly what I'd like to explore with you and other like
> >> > minds.
> >> >
> >> >> Dan Duvall, would it make sense to do a survey, as you did with
> >> >> Vagrant, to understand how our developers think of these? Such as:
> >> >> who owns them... who is responsible for a test failing... who
> >> >> writes them... who doesn't understand them... why they don't
> >> >> understand them... etc.?
> >> >
> >> > Great idea! I suspect that the number of false positives in a given
> >> > repo's test suite is inversely related to the number of developers on
> >> > the team actually writing tests, and the affordance by managers to do
> >> > so. If you're not regularly writing tests, you're probably not going
> >> > to feel comfortable troubleshooting and refactoring someone else's.
> >> > If TDD isn't factored into your team's velocity, you may feel like
> >> > the investment in writing tests (or learning to write them) isn't
> >> > worth it, or comes at the risk of missing deadlines.
> >> >
> >> > A survey could definitely help us to verify (or disprove) these
> >> > relationships.
> >> >
> >> > Some other questions I can think of:
> >> >
> >> > - How valuable are unit tests to the health/quality of a software
> >> > project?
> >> > - How valuable are browser tests to the health/quality of a software
> >> > project?
> >> > - How much experience do you have with TDD?
> >> > - Would you like more time to learn or practice TDD?
> >> > - How often do you write tests when developing a new feature?
> >> > - What kinds of test? (% of unit tests vs. browser tests)
> >> > - How often do you write tests to verify a bugfix?
> >> > - What kinds of test? (% of unit tests vs. browser tests)
> >> > - When would you typically write a unit test? (before implementation,
> >> > after implementation, when stuff breaks)
> >> > - When would you typically write a browser test? (during conception,
> >> > before implementation, after implementation, when stuff breaks)
> >> > - What are the largest barriers to writing/running unit tests? (test
> >> > framework, documentation/examples, execution time, CI, structure of
> >> > my code, structure of code I depend on)
> >> > - What are the largest barriers to writing/running browser tests?
> >> > (test framework, documentation/examples, execution time, CI)
> >> > - What are the largest barriers to debugging test failures? (test
> >> > framework, confusing errors/stack traces, documentation/examples,
> >> > debugging tools)
> >> >
> >> > I'll create a Phab task to track it. :)
> >> >
> >> > --
> >> > Dan Duvall
> >> > Automation Engineer
> >> > Wikimedia Foundation
>
>
>
> --
> Dan Duvall
> Automation Engineer
> Wikimedia Foundation
>
> _______________________________________________
> Mobile-l mailing list
> Mobile-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mobile-l
--
Jon Robson
* http://jonrobson.me.uk
* https://www.facebook.com/jonrobson
* @rakugojon
_______________________________________________
QA mailing list
QA(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/qa