Hi there. The next two weeks are when we take the technical topics from the TSC and the discussions held during the summit and turn them into concrete engineering blueprints for this cycle. I've created a page at: https://wiki.linaro.org/MichaelHope/Sandbox/1111Blueprints listing all of the TRs. Could you please look through these, find any with your name on them, and fill in the wiki page? I've put more notes on the page itself. Some of the topics may warrant specifications.
Let me know if you have questions on what the topics actually mean.
-- Michael
Michael Hope michael.hope@linaro.org wrote:
> Hi there. The next two weeks are when we take the technical topics from the TSC and the discussions held during the summit and turn them into concrete engineering blueprints for this cycle. I've created a page at: https://wiki.linaro.org/MichaelHope/Sandbox/1111Blueprints listing all of the TRs. Could you please look through these, find any with your name on them, and fill in the wiki page? I've put more notes on the page itself. Some of the topics may warrant specifications.
> Let me know if you have questions on what the topics actually mean.
In the past cycle, I've been using the feature to attach bugs as work items to a blueprint, and I had been planning to do so again for at least some blueprints this cycle. However, this means that work items appear and disappear as bugs are discovered and possibly resolved as invalid. This seems to conflict with the goal that work item numbers should stay stable this cycle ...
Any thoughts on how we should handle this?
Also, some of the feedback from UDS sessions included features that could arguably be considered part of our blueprints, but go beyond what was originally their scope. For example, one user asked for GDB tracepoints to be also supported with native debugging, and one asked for enhancements to cross-debugging the kernel via KGDB.
At this point it is not clear whether there is anything we can do about those requirements during this cycle, but I think we should keep track of them to make sure they're not forgotten. I can think of a number of ways to do so:
- Add work items to the current blueprints (and postpone them if we cannot in the end implement them)
- Do more investigation work first, and then add work items later if appropriate
- Don't add them at all to the existing blueprints, but queue up new blueprints (possibly for the next cycle)
Thoughts?
Mit freundlichen Gruessen / Best Regards
Ulrich Weigand
-- Dr. Ulrich Weigand | Phone: +49-7031/16-3727
STSM, GNU compiler and toolchain for Linux on System z and Cell/B.E.
IBM Deutschland Research & Development GmbH
On 19 May 2011 10:43, Ulrich Weigand Ulrich.Weigand@de.ibm.com wrote:
> Also, some of the feedback from UDS sessions included features that could arguably be considered part of our blueprints, but go beyond what was originally their scope. For example, one user asked for GDB tracepoints to be also supported with native debugging, and one asked for enhancements to cross-debugging the kernel via KGDB.
> At this point it is not clear whether there is anything we can do about those requirements during this cycle, but I think we should keep track of them to make sure they're not forgotten.
My feeling is that these should be new blueprints (or proto-blueprints on a wiki page) since they're really separate features. What I found last cycle was that I had a few mega-blueprints which just accumulated new work items across the cycle and then at the end had a number of things postponed. This time round I'm trying for much more tightly focused blueprints so it's clearer that some features are finished and some are not (and that some features are high priority and some are more wishlist).
I haven't yet figured out how or if bugs should fit into the blueprint and work item setup.
-- PMM
Peter Maydell peter.maydell@linaro.org wrote on 05/19/2011 12:16:58 PM:
> On 19 May 2011 10:43, Ulrich Weigand Ulrich.Weigand@de.ibm.com wrote:
>> Also, some of the feedback from UDS sessions included features that could arguably be considered part of our blueprints, but go beyond what was originally their scope. For example, one user asked for GDB tracepoints to be also supported with native debugging, and one asked for enhancements to cross-debugging the kernel via KGDB.
>> At this point it is not clear whether there is anything we can do about those requirements during this cycle, but I think we should keep track of them to make sure they're not forgotten.
> My feeling is that these should be new blueprints (or proto-blueprints on a wiki page) since they're really separate features. What I found last cycle was that I had a few mega-blueprints which just accumulated new work items across the cycle and then at the end had a number of things postponed. This time round I'm trying for much more tightly focused blueprints so it's clearer that some features are finished and some are not (and that some features are high priority and some are more wishlist).
OK, I guess that makes sense. For now I've left those things on the whiteboard of the existing blueprints, but *not* as work items, just as comments. They could be extracted to new blueprints if that's what we decide to do ...
Mit freundlichen Gruessen / Best Regards
Ulrich Weigand
-- Dr. Ulrich Weigand | Phone: +49-7031/16-3727
STSM, GNU compiler and toolchain for Linux on System z and Cell/B.E.
IBM Deutschland Research & Development GmbH
On Thu, May 19, 2011 at 9:43 PM, Ulrich Weigand Ulrich.Weigand@de.ibm.com wrote:
> Michael Hope michael.hope@linaro.org wrote:
>> Hi there. The next two weeks are when we take the technical topics from the TSC and the discussions held during the summit and turn them into concrete engineering blueprints for this cycle. I've created a page at: https://wiki.linaro.org/MichaelHope/Sandbox/1111Blueprints listing all of the TRs. Could you please look through these, find any with your name on them, and fill in the wiki page? I've put more notes on the page itself. Some of the topics may warrant specifications.
>> Let me know if you have questions on what the topics actually mean.
> In the past cycle, I've been using the feature to attach bugs as work items to a blueprint, and I had been planning to do so again for at least some blueprints this cycle. However, this means that work items appear and disappear as bugs are discovered and possibly resolved as invalid. This seems to conflict with the goal that work item numbers should stay stable this cycle ...
> Any thoughts on how we should handle this?
Peter and I talked about something similar yesterday. My current thoughts are:
- For tracking the release cost, have a 'Linaro GDB the product' blueprint with a work item per release/month
- For tracking support, don't attempt to: use the ticket system and reserve, say, 15 % of your time for bug fixes
- For tracking work with a big, known backlog, such as the GDB testsuite failures, use work items or bugs attached to a blueprint
It would be nice to have some metrics on the bugs such as reporting rate, retirement rate, backlog, % invalid, and so on. I'll ask the Infrastructure team.
> Also, some of the feedback from UDS sessions included features that could arguably be considered part of our blueprints, but go beyond what was originally their scope. For example, one user asked for GDB tracepoints to be also supported with native debugging, and one asked for enhancements to cross-debugging the kernel via KGDB.
> At this point it is not clear whether there is anything we can do about those requirements during this cycle, but I think we should keep track of them to make sure they're not forgotten. I can think of a number of ways to do so:
> - Add work items to the current blueprints (and postpone them if we cannot in the end implement them)
> - Do more investigation work first, and then add work items later if appropriate
> - Don't add them at all to the existing blueprints, but queue up new blueprints (possibly for the next cycle)
I'd like to do them as a backlog. Most of these new features are interesting to upstream, so we should record them either upstream somehow or on a wiki page. I'm not fond of blueprints as they're too hard to find and manipulate.
-- Michael
I've added some ideas to the NEON blueprint. There are now really 6 separate tasks, broken down into subitems, so it looks like we really could have 6 separate blueprints, as you mentioned on the wiki page. I wasn't sure how to create those blueprints correctly though. Please let me know if they don't look sensible!
Another one that would be interesting is the missed SMS opportunity exposed by Jim Huang's NEON intrinsic example from a while back. If we have a loop such as:
  for (int i = 0; i < n; i++) {
      unsigned short foo = a[i];
      ...
      a[i] = ...;
  }
then SMS treats the read from a[i + 1] as having a true dependency on a[i], preventing any useful cross-iteration scheduling.
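To make the pattern concrete, here's a filled-in, made-up instance (not Jim's original example; the function name and body are invented). The store to a[i] and the next iteration's load of a[i + 1] never touch the same element, so the dependency SMS assumes isn't a real one:

  /* Made-up instance: iteration i stores a[i]; iteration i+1 loads
     a[i+1]. The two never alias, but SMS conservatively treats the
     load as truly dependent on the previous iteration's store,
     which blocks cross-iteration scheduling. */
  void halve (unsigned short *a, int n)
  {
      for (int i = 0; i < n; i++) {
          unsigned short foo = a[i];
          a[i] = foo >> 1;
      }
  }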
Is that already on our radar? If not, could it be treated as another NEON work item? Like my auto inc/dec suggestion in the blueprint, it's really a generic improvement, but as with the inc/dec change, I expect it's going to affect NEON more than core code.
Richard
Hello Richard,
> Another one that would be interesting is the missed SMS opportunity exposed by Jim Huang's NEON intrinsic example from a while back. If we have a loop such as:
>
>   for (int i = 0; i < n; i++) {
>       unsigned short foo = a[i];
>       ...
>       a[i] = ...;
>   }
>
> then SMS treats the read from a[i + 1] as having a true dependency on a[i], preventing any useful cross-iteration scheduling.
That's indeed a long-standing issue which suppresses SMS, and resolving it would be great! The last attempt to address it, if I'm not mistaken, was made by the ISPRAS guys for IA-64, where they tried to propagate data-dependence information from trees to RTL: http://gcc.gnu.org/ml/gcc/2007-12/msg00240.html That patch is quite old and not in mainline.
Thanks, Revital
On Fri, May 20, 2011 at 2:07 AM, Richard Sandiford richard.sandiford@linaro.org wrote:
> I've added some ideas to the NEON blueprint. There are now really 6 separate tasks, broken down into subitems, so it looks like we really could have 6 separate blueprints, as you mentioned on the wiki page. I wasn't sure how to create those blueprints correctly though. Please let me know if they don't look sensible!
Let's collect things first. Provided the topics have sufficient meat in them, I'll split them into blueprints later.
Hmm. The whole 'Do topic A; Commit upstream; Commit in Linaro' work item repetition is unfortunate. It's correct but it hides the topics in the noise.
How about also 'Ensure vectorised code doesn't regress over non-vectorised code'? The goal would be for 90 % of benchmarks to not regress and 99 % to regress no more than 2 %. At the moment good ol' CoreMark is worse with -O3 -mfpu=neon...
-- Michael
Michael Hope michael.hope@linaro.org writes:
> On Fri, May 20, 2011 at 2:07 AM, Richard Sandiford richard.sandiford@linaro.org wrote:
>> I've added some ideas to the NEON blueprint. There are now really 6 separate tasks, broken down into subitems, so it looks like we really could have 6 separate blueprints, as you mentioned on the wiki page. I wasn't sure how to create those blueprints correctly though. Please let me know if they don't look sensible!
> Let's collect things first. Provided the topics have sufficient meat in them, I'll split them into blueprints later.
> Hmm. The whole 'Do topic A; Commit upstream; Commit in Linaro' work item repetition is unfortunate. It's correct but it hides the topics in the noise.
Yeah. I suppose one advantage of splitting the blueprint up might be that each "real" task becomes more obvious.
> How about also 'Ensure vectorised code doesn't regress over non-vectorised code'? The goal would be for 90 % of benchmarks to not regress and 99 % to regress no more than 2 %. At the moment good ol' CoreMark is worse with -O3 -mfpu=neon...
Well, I suppose if we're setting figures like that, then it's really "Limit regressions in vectorised code over non-vectorised code". :-) But maybe it'd be better to keep figures out of it. 99% is awkward because I don't think we even have 100 benchmarks yet. And what about benchmarks like DENbench that run the same code more than once, but with a different data set? Does each data set count as a separate benchmark?
Maybe 'Deal with regressions in vectorised code over non-vectorised code.', if that isn't too wishy-washy? With the usual "commit upstream" and "commit to Linaro 4.6" too, of course.
FWIW, all the examples I've seen so far are due to the over-promotion of vector operations (e.g. doing things on ints when shorts would do).
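A made-up illustration of the kind of over-promotion I mean (not one of the actual failing cases; the function is invented): C's integer promotion turns the unsigned short arithmetic into int arithmetic, and if the vectoriser keeps it at int width we get 4 lanes per 128-bit NEON vector instead of the 8 that unsigned short would allow:

  /* Made-up illustration: the adds only need 16 bits, but the
     operands are promoted to int. Vectorising at int width halves
     the number of elements processed per vector operation. */
  void add_bias (unsigned short *a, unsigned short bias, int n)
  {
      for (int i = 0; i < n; i++)
          a[i] = a[i] + bias;  /* done on int, truncated on store */
  }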
Richard
> Well, I suppose if we're setting figures like that, then it's really "Limit regressions in vectorised code over non-vectorised code". :-) But maybe it'd be better to keep figures out of it. 99% is awkward because I don't think we even have 100 benchmarks yet. And what about benchmarks like DENbench that run the same code more than once, but with a different data set? Does each data set count as a separate benchmark?
I would actually vote for each data set counting as a separate benchmark, as it potentially exercises different code paths and gives us different things to look at. Thus each run with a different workload constitutes a new benchmark to look at.
> FWIW, all the examples I've seen so far are due to the over-promotion of vector operations (e.g. doing things on ints when shorts would do).
That's interesting to note. I'd be interested in trying to help figure out more such cases.
cheers Ramana
On Fri, May 20, 2011 at 7:33 PM, Richard Sandiford richard.sandiford@linaro.org wrote:
> Michael Hope michael.hope@linaro.org writes:
>> How about also 'Ensure vectorised code doesn't regress over non-vectorised code'? The goal would be for 90 % of benchmarks to not regress and 99 % to regress no more than 2 %. At the moment good ol' CoreMark is worse with -O3 -mfpu=neon...
> Well, I suppose if we're setting figures like that, then it's really "Limit regressions in vectorised code over non-vectorised code". :-) But maybe it'd be better to keep figures out of it. 99% is awkward because I don't think we even have 100 benchmarks yet. And what about benchmarks like DENbench that run the same code more than once, but with a different data set? Does each data set count as a separate benchmark?
I felt a bit silly writing the 99 % thing. How about 'Ensure vectorised code doesn't regress over non-vectorised code in almost all cases; ensure vectorised code doesn't regress by more than n % in any case', with some type of escape clause for one benchmark that's too hard for this cycle?
-- Michael
> At the moment good ol' CoreMark is worse with -O3 -mfpu=neon...
It may be worth trying -fvect-cost-model.
Ira
On Sun, May 22, 2011 at 9:05 PM, Ira Rosen IRAR@il.ibm.com wrote:
>> At the moment good ol' CoreMark is worse with -O3 -mfpu=neon...
> It may be worth trying -fvect-cost-model.
Worse again I'm afraid. -O3 -mfpu=neon scores 99 % of -O3. -O3 -mfpu=neon -fvect-cost-model scores 96 %.
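For reference, the three configurations compared were along these lines (the cross-compiler name and benchmark source file here are placeholders, not the actual harness):

  arm-linux-gnueabi-gcc -O3 -o cm-base coremark.c
  arm-linux-gnueabi-gcc -O3 -mfpu=neon -o cm-neon coremark.c
  arm-linux-gnueabi-gcc -O3 -mfpu=neon -fvect-cost-model -o cm-cost coremark.c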
-- Michael