|author||mperes <mperes@web>||2014-10-10 08:33:27 -0700|
|committer||xorg <firstname.lastname@example.org>||2014-10-10 08:33:27 -0700|
Diffstat (limited to 'Events/XDC2014/XDC2014ThibaultAccessibility.mdwn')
1 files changed, 3 insertions, 0 deletions
diff --git a/Events/XDC2014/XDC2014ThibaultAccessibility.mdwn b/Events/XDC2014/XDC2014ThibaultAccessibility.mdwn
index 99839412..7af1d2e5 100644
@@ -3,3 +3,6 @@
Yes, it does make sense for a blind user to use a graphical desktop. More generally, the desktop has to be accessible to users with a very wide range of disabilities, so we need to make sure that accessibility tools can plug into the stack to get, modify, or inject information, so as to compensate for the user's disabilities.
I will briefly recall the path of an 'a', from the keyboard key press to the application, i.e. the input part, and then from the application up to its rendering on the screen, i.e. the output part. I will then explain how various tools plug in at various places. On the input side, there are virtual keyboards, braille keyboards, AccessX keyboards, ... and the infamous issues between keycodes and keysyms, which we could perhaps take the opportunity to fix with Wayland. On the output side, there are magnification tools, color modifiers, ... and last but not least, screen readers, for which I will explain the basics of the accessibility bus (at-spi).
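The keycode/keysym distinction can be sketched with a toy mapping: a keycode names a physical key position independent of layout, while the keysym it produces depends on the active layout and modifier state. This is only an illustrative sketch, not the real X11 lookup tables; keycode 38 is the evdev keycode of the AC01 key, which yields 'a'/'A' on a US layout but 'q'/'Q' on a French one.

```python
# Toy illustration of the keycode -> keysym lookup (not the real X11 tables).
# A keycode identifies a physical key position; the resulting keysym depends
# on the keyboard layout and the modifier state (here, just Shift).
KEYMAP = {
    38: {"us": ("a", "A"), "fr": ("q", "Q")},  # same physical key, AC01
}

def keycode_to_keysym(keycode, layout="us", shift=False):
    """Resolve a keycode to a keysym under the given layout and Shift state."""
    unshifted, shifted = KEYMAP[keycode][layout]
    return shifted if shift else unshifted

print(keycode_to_keysym(38))                         # 'a' on a US layout
print(keycode_to_keysym(38, layout="fr", shift=True))  # 'Q' on a French layout
```

Accessibility tools such as virtual keyboards sit on opposite sides of this mapping: some need to inject keycodes (as if a key were pressed), others need to inject keysyms directly (a character regardless of layout), which is one source of the issues mentioned above.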