A couple of days ago someone asked for my view on this project. Today I got some time to go through it; below are my comments.
The project idea is written here: http://wiki.smc.org.in/SoC/2013/Project_ideas#Automated_Rendering_Testing
1. I really like that someone picked this idea for GSoC. This is something we have been looking for in UTRRS for a long time.
UTRRS has one basic drawback: manual intervention is needed to find the actual rendering problems.
So if some intelligent program could automatically compare the standard images against images generated on the fly from the font under test, and report the differences, that would be simply great.
When I went through the project title, I thought that was what this project was going to do.
I am not sure what the exact implementation plan is.
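The image-comparison step I have in mind could be sketched roughly like this. This is only a minimal sketch under my own assumptions: the function names are hypothetical, and I assume both images are available as same-sized matrices (lists of rows) of pixel values; UTRRS does not work this way today.

```python
# Sketch: automated comparison of a standard (reference) image against
# an image rendered on the fly from the font under test.

def diff_ratio(reference, rendered):
    """Return the fraction of pixels that differ between two images."""
    if len(reference) != len(rendered):
        raise ValueError("images must have the same height")
    total = 0
    mismatched = 0
    for ref_row, out_row in zip(reference, rendered):
        if len(ref_row) != len(out_row):
            raise ValueError("images must have the same width")
        for ref_px, out_px in zip(ref_row, out_row):
            total += 1
            if ref_px != out_px:
                mismatched += 1
    return mismatched / total if total else 0.0

def rendering_matches(reference, rendered, tolerance=0.01):
    """Pass if at most `tolerance` of the pixels differ; a small
    tolerance absorbs anti-aliasing and hinting noise."""
    return diff_ratio(reference, rendered) <= tolerance
```

In practice the rendered image would come from rasterizing the font under test, and the tolerance would need tuning per script, which is where the research part of the project would be.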
2. The project idea says: "One method to do this might be to check the order of glyphs/glyph index output by the rendering engine - this depends on the font too."
Actually, in HarfBuzz we are already doing this with scripting. Behdad has implemented an excellent test suite for testing the rendering of complex scripts. It works the following way.
The script first passes a particular word and the font under test to Uniscribe, which returns a sequence of glyph IDs (hex values) from the font.
Then the script passes the same word and font to HarfBuzz, which also returns a sequence of glyph IDs (hex values).
Assuming the font works 100% correctly in Uniscribe, its output is treated as the reference. If the HarfBuzz output (hex values) matches the Uniscribe output, rendering is correct; otherwise the test fails.
With this method we are already automatically testing millions of words in HarfBuzz.
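The comparison loop described above can be sketched as follows. This is my own reconstruction, not the actual HarfBuzz test suite: the engine wrappers are passed in as callables, standing in for whatever drives Uniscribe and HarfBuzz (e.g. the hb-shape tool) in the real scripts.

```python
# Sketch: compare the glyph sequences produced by a reference engine
# (Uniscribe) and a candidate engine (HarfBuzz) for a list of words.

def glyphs_match(reference_glyphs, candidate_glyphs):
    """The test passes only on an exact match of both the glyph IDs
    and their order."""
    return list(reference_glyphs) == list(candidate_glyphs)

def run_word_tests(words, shape_reference, shape_candidate):
    """Shape every word with both engines and collect the failures.

    `shape_reference` / `shape_candidate` are callables that take a
    word and return a sequence of glyph IDs for the font under test.
    """
    failures = []
    for word in words:
        if not glyphs_match(shape_reference(word), shape_candidate(word)):
            failures.append(word)
    return failures
```

With real engine wrappers plugged in, running this over a large word list gives exactly the kind of automated pass/fail report described above.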
3. In my humble opinion, this project should be planned around my first point. I agree it is tough and might be a bit research-oriented, but I think that is the right way. HarfBuzz is already doing automated rendering testing by comparing its output against Uniscribe and other OpenType layout engines.