
Servo + ateam discussion on web platform tests

  • What do we need to do in Servo to make them fast and reliable?
  • Can we make a first step on CSS ref tests?
  • How can we upstream things as smoothly as possible?
  • How can we track things over time

Issues Servo has?

  • jdm: The process of creating new tests. We can't submit a PR directly; we have to work around it. Having a directory for new tests would be great.
  • jgraham: For Gecko, it depends on tree-copy trickery. So, I list Gecko patches since the last sync and upstream / merge them all automatically (trusting that the Gecko review process is good enough for upstreaming). Maybe do something similar for Servo? But Servo uses a submodule instead of a copy.
  • simonsapin: Should we have a fork of the repo and do it that way?
  • jdm: We have a fork and a branch.
  • jgraham: One option is just to land stuff there and then upstream from there. Just copy it over.
  • jack: But then it's two PRs. Land in the fork, then update the submodule pointer, etc.
  • manish: And pulling updates from wpt is hard...
  • jgraham: Could script it so that we always upload it.
  • simonsapin: Two PRs is already the plan for all our dependencies.
  • jgraham: Could also just copy the code instead of a git submodule.
  • larsberg: We don't need to keep the git submodule, since it's the only one we had. Can we just copy?
  • jdm: But we have a fork for some nontrivial stuff (resources).
  • jgraham: Maybe we could special-case the resources. Just copy everything but the resources, which are just testharness.js and related files (a rough copy step is sketched after this section). Hopefully that will go away, but I don't know how long it will be or what we're missing.
  • jdm: .style is missing from Servo.
  • jgraham: And do you have iframes? That's the other part...
  • jdm: Have them, but no load events. Is that important?
  • jgraham: Just need window.open & window.parent...
  • jdm: Nope.
  • jgraham: Maybe window.parent is always null and window.open is always self and things would "just work" enough to make testharness.js Servo-compatible.
  • jdm: So, the direction is to get as close to Gecko as possible so we can use the same mechanisms.
  • larsberg: Can you trust the Servo reviews as well as Gecko?
  • jgraham: Yes. If things go south, we'll change the policy.
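
As a concrete illustration of the "copy instead of submodule" idea above, a sync step could look roughly like the Python sketch below, which copies an upstream wpt checkout into the tree while leaving the Servo-specific resources (testharness.js and related files) alone. The paths and the list of directories to keep are assumptions, not existing Servo infrastructure.

```python
#!/usr/bin/env python
# Hedged sketch: copy a web-platform-tests checkout into the Servo tree,
# keeping Servo's forked resources/ (testharness.js and friends) untouched.
# All paths here are hypothetical placeholders.
import os
import shutil

WPT_CHECKOUT = "../web-platform-tests"   # upstream checkout (assumed location)
DEST = "tests/wpt/web-platform-tests"    # in-tree copy (assumed location)
KEEP_LOCAL = {"resources", ".git"}       # Servo-specific bits we don't overwrite

def sync():
    for entry in os.listdir(WPT_CHECKOUT):
        if entry in KEEP_LOCAL:
            continue
        src = os.path.join(WPT_CHECKOUT, entry)
        dst = os.path.join(DEST, entry)
        # Replace whatever was there before so upstream deletions propagate too.
        if os.path.isdir(dst):
            shutil.rmtree(dst)
        elif os.path.exists(dst):
            os.remove(dst)
        if os.path.isdir(src):
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)

if __name__ == "__main__":
    sync()
```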

Making the wpt stuff faster

  • manish: What do we need to be able to not destroy the browser between every test?
  • jgraham: Create & control the browser remotely. WPT needs to be able to load a page (e.g., .navigate) and then execute a script in the context of that page. In Gecko, the marionette stuff is based on top of the devtools implementation.
  • jack: In a different tab? Process?
  • jdm: No. Just run a server that can communicate with a driver that executes the tests.
  • jack: So we could just make a miniservo / port that implements what it needs?
  • jgraham: Yes. Just a wire protocol, a socket, etc.
  • ato: In Gecko, we just build on top of dev tools. We have some chrome-side and content-side stuff to execute the script. As long as you can start a new session, navigate to a page, and close the session and execute some random JS in the context of that, it should be fine.
  • manish: All we need to be able to do is destroy the previous state (to avoid poisoning the cache).
  • jgraham: That's also a problem in Gecko. There, we close the tab after each test. Makes it more stable that way. There's no reason it has to, but it's assumed that things will be more stable if it does. The other thing you need is to return the results, so you have to be able to serialize JSON objects (see the sketch after this section). The full WebDriver is more complicated, but not for this.
  • ato: We have the algorithms for the serialization. They're horrible, but they're there for Gecko!
  • jdm: Written in JS?
  • ato: Yes. I'm afraid so. But for the basic use case, it should just be fine with JSON.
  • manish: Does that work with parallelism, too?
  • jgraham: In gecko, we open a separate copy on a separate port. If you can do that, it'll work.
  • ato: Slightly more complicated... because of profiles.
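
To make the "miniservo port that implements what it needs" idea concrete, below is a rough sketch of the driver side, assuming a made-up newline-delimited JSON-over-TCP protocol with commands for starting a session, navigating, executing script, and closing the session. None of the command names, the port, or the result hook come from the discussion; they are placeholders for whatever protocol Servo actually exposes.

```python
# Hedged sketch of the driver side of a minimal "navigate + execute script"
# protocol, as discussed above. The wire format (newline-delimited JSON with
# command names like "newSession", "get", "executeScript") is invented here
# purely for illustration; it is not an existing Servo or WebDriver API.
import json
import socket

class MiniServoSession(object):
    def __init__(self, host="127.0.0.1", port=7000):
        self.sock = socket.create_connection((host, port))
        self.buf = b""
        self.send({"command": "newSession"})

    def send(self, message):
        self.sock.sendall(json.dumps(message).encode("utf-8") + b"\n")
        return self.recv()

    def recv(self):
        while b"\n" not in self.buf:
            self.buf += self.sock.recv(4096)
        line, self.buf = self.buf.split(b"\n", 1)
        return json.loads(line.decode("utf-8"))

    def navigate(self, url):
        return self.send({"command": "get", "url": url})

    def execute_script(self, script):
        # The browser side is expected to reply with a JSON-serializable value,
        # which is all testharness.js result collection needs.
        return self.send({"command": "executeScript", "script": script})

    def close(self):
        self.send({"command": "deleteSession"})
        self.sock.close()

# Rough shape of running one test (URLs and the result hook are hypothetical):
#   session = MiniServoSession()
#   session.navigate("http://web-platform.test:8000/dom/historical.html")
#   results = session.execute_script("return window.__wpt_results;")
#   session.close()
```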

CSS Ref tests

  • larsberg: I'd like to make some progress around them.
  • jgraham: There's a large manual test suite for CSS 2.1. Everybody realized that's a disaster. Since then, there's been a requirement that all CSS tests are self-describing ref tests or testharness.js tests, where possible. So, same as WPT. So, all of the CSS3 stuff should be reftests. But, there are still 4k manual tests that we should convert "sometime" but it's not really happening.
  • simonsapin: Gérard Talbot has been doing a lot on them. The last snapshot moved closer to 40%.
  • jdm: Different than testharness reftests?
  • jgraham: The CSS reftests are almost but not quite compatible with WPT reftests. Two problems. For many, it will just work. For some of them, there's a difference in the semantics between CSS and WPT around tests that should match multiple things - how do you match them? Especially "thing1 or thing2" type tests. Basically, we just need to implement that in WPT. It's complicated, but we should just use theirs. The other problem is that the tests require a build step, since they're XHTML files. So to get the test, you have to run a build system, and I don't know what it requires / does.
  • gw: I did a full checkout and got it to do a full build. The dependencies are pretty wacky.
  • jgraham: Is it self-contained, or does it read from a database or something?
  • gw: It seemed self-contained.
  • jgraham: There's stuff in the scripts that touches their databases (css-test-helper-thing?).
  • gw: I just did enough to get the HTML files that we could run.
  • jgraham: Not sure what fraction of the tests need a build step vs. just work.
  • simonsapin: We don't have an XML parser.
  • gw: About half are XHT.
  • larsberg: So what do we do? Ideally it would be like WPT and we could upstream tests. Who can we talk with about making this better?
  • jgraham: Peter Linss, fantasai. Without just doing the work, it's hard to have the conversation. If we pull their stuff into WPT, they would want their infrastructure to keep working. It's hard to make that work without knowing how their stuff works. It's about not regressing their toolstack.
  • larsberg: What would you like us to do? We need CSS ref tests.
  • jgraham: If we had a simple way of taking a checkout of the CSS ref tests and running them inside of WPT, then we could run them. Wouldn't fix the contribution story, because it's still a separate repo, but we could at least pull from upstream, and maybe work out how their upstreaming process works...
  • simonsapin: Two ways to upstream: a new directory in Mercurial, or the Test the Web Forward stuff, where you create a PR against a GitHub repo and it'll work.
  • jgraham: Still too much friction, where if you want to sync you have to merge your changes upstream first. I think they still have rules that you can't have the same vendor who wrote the test review the test. An intermediate step: if we could pull their GitHub repo down, copy it into the tree as we do with the WPT, and have a step that runs their build step, that would give us something we could run (roughly the flow sketched below). Then we could talk about how to upstream stuff and try to make that smoother.
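
A hedged sketch of the "pull, build, copy" flow described above: clone the CSS test repository, run its build step, and copy the built tests into the tree next to the WPT copy. The repository URL, the build command, and the output directory are placeholders, since (as noted above) the actual build requirements aren't understood yet.

```python
# Hedged sketch of pulling the CSS test repo, running its build step, and
# copying the built tests into the tree next to the WPT copy. The repo URL,
# build invocation, and output path are all placeholders: the discussion above
# notes that the actual build requirements are not yet understood.
import os
import shutil
import subprocess

CSS_REPO = "https://github.com/w3c/csswg-test.git"  # assumed upstream location
CHECKOUT = "_csswg-test"
BUILD_CMD = ["python", "tools/build.py"]             # placeholder build entry point
BUILD_OUTPUT = os.path.join(CHECKOUT, "dist")        # placeholder output directory
DEST = "tests/wpt/css-tests"                         # assumed in-tree destination

def pull():
    if os.path.isdir(CHECKOUT):
        subprocess.check_call(["git", "-C", CHECKOUT, "pull"])
    else:
        subprocess.check_call(["git", "clone", CSS_REPO, CHECKOUT])

def build():
    subprocess.check_call(BUILD_CMD, cwd=CHECKOUT)

def copy_into_tree():
    if os.path.isdir(DEST):
        shutil.rmtree(DEST)
    shutil.copytree(BUILD_OUTPUT, DEST)

if __name__ == "__main__":
    pull()
    build()
    copy_into_tree()
```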

Web platform ref tests

  • jdm: What do we need to do in Servo?
  • jgraham: Not much. Implement a ServoRefTestExecutor...
  • manish: We have one. We get PNGs out of Servo.
  • ato: Just expose it to WPT.
  • jgraham: I think that's a very small piece of work (roughly the shape sketched after this section).
  • manish: It's just a flag.
  • jgraham: It only hasn't happened because we haven't started doing it.
  • simonsapin: What do we do now?
  • jgraham: Skip the ref tests. Later, we may need to support async ref tests (Wait). Basically, you can annotate the root element with a wait class, and then Gecko won't take a screenshot until the class goes away.
  • ato: In Gecko, we create a canvas and paint to the canvas then push it across the wire.
  • jgraham: That architecture is just because it's how things were easy to put together in Gecko.
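
As a rough illustration of how small the Servo reftest executor could be, here is a hedged sketch that assumes the Servo binary can render a URL to a PNG via an output flag and exits when it is done; the binary path and the flag are assumptions, and a real executor would decode and compare pixel data rather than raw PNG bytes, and would handle "!=" references and the wait class.

```python
# Hedged sketch of a minimal reftest executor for Servo. It assumes the Servo
# binary can be told to render a URL to a PNG via an output flag ("-o" here is
# an assumption) and exits when rendering is done. A real executor would decode
# the PNGs and compare pixel data (and support "!=" references and the wait
# class); comparing raw bytes is only a first approximation.
import subprocess
import tempfile

SERVO_BINARY = "./target/servo"  # assumed path to the build

def screenshot(url):
    out = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
    out.close()
    subprocess.check_call([SERVO_BINARY, "-o", out.name, url])
    with open(out.name, "rb") as f:
        return f.read()

def run_reftest(test_url, ref_url):
    return "PASS" if screenshot(test_url) == screenshot(ref_url) else "FAIL"

# Example (hypothetical URLs):
#   print(run_reftest("http://web-platform.test:8000/test.html",
#                     "http://web-platform.test:8000/test-ref.html"))
```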

Tracking progress over time

  • jgraham: Yeah, this is the thing we have. There's a report thing in Github somewhere that says you passed some / failed some. At some point, I did one that worked for more than one browser.
  • manish: We could edit the Python to output structured numbers instead.
  • jgraham: It'd be nice to have a page that tracked it for multiple browsers.
  • larsberg: areweplatformyet.com
  • jgraham: The goal for this, always, has been so that you could check each browser / platform feature to see what is supported. We'd like to avoid tying our reporting to a Mozilla-specific bit of infrastructure.
  • manish: One issue might be that we split the WPT tests across two machines. Can we merge them? (A possible merge is sketched after this section.)
  • jgraham: There are some assumptions, so you need to get rid of the SuiteStart/SuiteEnd stuff. But there's a bug open to fix that requirement.
  • manish: We'd need to pass it between the servers; one FTP thing. We could also put the files somewhere and then have the doc build upload them.
  • jgraham: You can upload things to Treeherder, too. Then you could do that...
  • ato: Gaia and B2G are doing that right now.
  • manish: Whom do we talk to?
  • jgraham: Ed Morley or Mauro. It's not documented, but you can find existing code from Gaia.
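
To make the "merge runs from two machines" question concrete, here is a sketch that merges two structured log files into a single pass/fail summary, assuming each run wrote newline-delimited JSON records with an action field and a per-test status on test_end records, roughly in the style of the harness's structured logging. The exact field names and file names are assumptions.

```python
# Hedged sketch of merging structured-log output from two machines into one
# pass/fail summary. It assumes each run wrote newline-delimited JSON records
# with an "action" field and, on "test_end" records, "test" and "status"
# fields, roughly in the style of the harness's structured logs; the field
# names and file names here are assumptions, not a documented format.
import json
from collections import Counter

def merge(log_paths):
    statuses = {}
    for path in log_paths:
        with open(path) as f:
            for line in f:
                record = json.loads(line)
                # Drop per-run bookkeeping (suite_start / suite_end) when
                # merging, as discussed above; only per-test results are kept.
                if record.get("action") != "test_end":
                    continue
                statuses[record["test"]] = record["status"]
    return Counter(statuses.values())

if __name__ == "__main__":
    # Hypothetical log files, one per machine.
    summary = merge(["wpt-run-machine1.log", "wpt-run-machine2.log"])
    for status, count in sorted(summary.items()):
        print("%s: %d" % (status, count))
```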