Meeting 12.16.2016
Rob: Tic Tac Toe: the NCAM work is similar to that game. Keyboard users tab to an option in the toolbar and press Space to select it; focus then moves them through the potential containers, which are highlighted. Tab/Shift+Tab moves between them and Space drops, and the screen reader can give updates (this conflicts with JAWS, so Shift+Space is used to allow it). It's pure keyboard navigation. One challenge: targets are alt-tagged so students know where they are dropping things. It's not currently live; some legacy interactions still pose problems.
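The pattern Rob describes (Space to grab, Tab to cycle highlighted drop targets, Space to drop, with a spoken announcement) could be sketched as a small state machine. This is a minimal illustration; the class and all names in it are invented here, not taken from the NCAM code.

```javascript
// Sketch of a grab / cycle-targets / drop keyboard interaction.
class KeyboardDragDrop {
  constructor(items, targets) {
    this.items = items;       // draggable item ids
    this.targets = targets;   // drop-target ids
    this.grabbed = null;      // currently grabbed item, if any
    this.targetIndex = 0;     // which target Tab has reached
  }
  // Space on a focused item picks it up; focus moves to the targets.
  grab(item) {
    if (this.items.includes(item)) {
      this.grabbed = item;
      this.targetIndex = 0;
    }
  }
  // Tab (or Shift+Tab, in reverse) cycles the highlighted drop target.
  nextTarget() {
    this.targetIndex = (this.targetIndex + 1) % this.targets.length;
  }
  // Space again drops onto the highlighted target and returns an
  // announcement a screen reader could speak.
  drop() {
    if (this.grabbed === null) return null;
    const msg = `${this.grabbed} dropped on ${this.targets[this.targetIndex]}`;
    this.grabbed = null;
    return msg;
  }
}
```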
Jesse: Working on an example with balloons and static electricity: the balloon picks up charges from a sweater, and then you move the balloon on its own. The challenge is a 2D area: Tab to the balloon, Space to pick it up, and arrow keys to drag it to the sweater or wall. But we've run into technical issues; JAWS doesn't like arrow keys unless you are moving from grid to grid, and all screen readers are intercepting keystrokes.
Even JAWS users run into this when they go to use those keys, so to get around it we use WASD keys for moving, which avoids the virtual cursor.
Descriptions report where the balloon is relative to the other objects.
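The arrow-key movement Jesse describes, with WASD as the fallback for screen readers that intercept arrows, could be sketched as below. The step size, play-area bounds, and key map are assumptions for illustration, not values from the PhET simulation.

```javascript
// Each keypress nudges the grabbed balloon by a fixed step, clamped
// to the play area; WASD mirrors the arrow keys.
const STEP = 5;
const BOUNDS = { minX: 0, maxX: 100, minY: 0, maxY: 100 };
const KEY_DELTAS = {
  ArrowLeft: [-STEP, 0],  a: [-STEP, 0],
  ArrowRight: [STEP, 0],  d: [STEP, 0],
  ArrowUp: [0, -STEP],    w: [0, -STEP],
  ArrowDown: [0, STEP],   s: [0, STEP],
};

function moveBalloon(pos, key) {
  const delta = KEY_DELTAS[key];
  if (!delta) return pos; // unhandled key: leave the position alone
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    x: clamp(pos.x + delta[0], BOUNDS.minX, BOUNDS.maxX),
    y: clamp(pos.y + delta[1], BOUNDS.minY, BOUNDS.maxY),
  };
}
```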
Jason: When I go back to the wiki to add contextual information: if it's clear visually, it's fine for people who can see the visual output to use switch/keyboard input, but for non-visual output we need appropriate contextual information, and to decide what aspects of the path from source to destination must be considered.
Jesse: Agreed on using description to say where the object is between them, but with more objects, describing where everything is and where it's going would be difficult; I'd like to see this more generalized.
Jason: Right.
Richard: The problem to solve is making block languages like Scratch or Blockly accessible; currently they are not, and in Hour of Code more than half the activities are block-language programming. We're using a tablet, chosen because you can touch and get auditory feedback to locate things. If you wanted to do drag and drop, you could find the toolbox, pick a tool, move it to a portion of your code, and drop it. There are nested loops and if statements, though the youngest students won't be using too many of those. So the approach: select a tool from the toolbox, and then you don't have to drag it per se; you can just explore to find where you want to put it, double tap again, and it could ask whether you want it above or below. The main feature is that around the screen you are in exploration mode, and hopefully you can do it in reverse order too. You don't want to interrupt normal exploration with moving your finger around the screen, so it's a double tap to select or place an item. The advantage of a touch screen is that you get that exploration, touching or flicking a finger to go through the items sequentially. Initially students won't have anything but a straight-line program, and they build up from there.
This relates to what Jesse is doing with PhET… Blockly is a web-based thing and Google is working on an Android version; we are partnering with Google and the Blockly team on this. There are a lot of touch screens out there, and we are not going to work with all of them. There are some larger 18” touch screens, so this could work for low vision since everything is bigger, but a 10” or 7” diagonal is what we are thinking; not sure about cell phones, but we will see.
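The two-step insertion Richard outlines (pick a tool, then pick an anchor block and answer above/below) reduces to a simple list insertion for a straight-line program. A rough sketch, with invented data shapes rather than actual Blockly structures:

```javascript
// Insert a tool above or below an anchor block in a straight-line
// program, represented here as a plain array of block names.
function insertBlock(program, tool, anchorIndex, where) {
  const copy = program.slice();                      // don't mutate input
  const at = where === 'above' ? anchorIndex : anchorIndex + 1;
  copy.splice(at, 0, tool);                          // place the new block
  return copy;
}
```

Nested loops and if statements would need a tree rather than a flat array, which is where the tree-navigation gestures discussed below come in.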
Jason: It may be faster for the user to find the insertion point first and then go to the toolbox, so it can tell you what you can put in there.
Richard: That's a great point.
Jason: Though you may want to choose what you are inserting first, so having both options would help.
Richard: We are really at an experimental stage; we're running informative studies with blindfolded participants to get some ideas.
Jason: A minor point: if you do insert something and it has to go above or below, double tap and slide above/below or left/right could be an option.
Richard: For navigating a program with a lot of tree structure, gestures for navigating a tree could make that easier.
Jason: I tried to capture that in the wiki; I think there is some value in it. The simple case is really easy: two selection operations. Select from one bucket, which puts things in a state whereby all destinations are active, and then you can pick a destination.
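The two-selection model Jason outlines could be sketched as follows; the state shape here is an assumption, just to show that only two pieces of state are needed (the sets and the currently picked source):

```javascript
// Before the first selection only the sources respond; once a source
// is picked, every destination becomes active for the second selection.
function activeSet(state) {
  return state.picked === null ? state.sources : state.destinations;
}

// Each call is one selection; the second one completes the move.
function select(state, choice) {
  if (state.picked === null) {
    return { ...state, picked: choice };               // first selection
  }
  return { ...state, picked: null, done: [state.picked, choice] };
}
```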
Richard: Agreed. What's missing is the exploration stage: you have these objects, the balloons and the sweater, accumulating charges.
Charles: That's taking the simple example, but then you have to do the journey, the path, before you get to the destination where you drop it and see what happens.
Richard: I think what Jesse is saying is that the path is important too, because if you don't rub the balloon against the sweater as part of the dragging you don't get the effect. In the cases Jason and I were talking about, you don't want your exploration to interfere with the drag and drop; it's just select, select.
Jason: I should say that what we are doing here at the moment is working on a purely spoken interface to the same static electricity example. That's under development, and we will have something to say about it when we have progress.
Charles: If we take the really simple case, where we select something from a given set and drop it into another set, what do we have to do programmatically to make that happen, as far as the magic markup? Can we take notes from ARIA? ARIA is now deprecating drag and drop; why do we think that is, and what are we going to do in its place?
Jason: I was in the ARIA working group meeting, so I have some idea why that was. They had concerns about the approach used in ARIA 1.0, and that it hadn't been properly implemented. There were also concerns that most of the real cases turned out to be the kind of simple selection operations we just talked about: you don't need special ARIA attributes for drag and drop if you're going to be making two selections. But where something more complicated is involved, there isn't consensus on how it should be handled. It was suggested that if they couldn't find good use cases of the existing model in the wild, it was time to reevaluate and work out how to handle the more complicated cases.
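For reference, the ARIA 1.0 attributes in question are aria-grabbed and aria-dropeffect, both deprecated in ARIA 1.1. A select-select interface needs no drag-specific attributes at all; ordinary toggle-button semantics can carry the state. A hypothetical sketch of the attribute values a renderer might set (the function names and roles are illustrative, not a spec recommendation):

```javascript
// The picked source reads as a toggled button.
function sourceAttrs(isPicked) {
  return { role: 'button', 'aria-pressed': String(isPicked) };
}

// Destinations are only actionable once a source has been picked.
function targetAttrs(somethingPicked) {
  return { role: 'button', 'aria-disabled': String(!somethingPicked) };
}
```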
Jesse: So was the consensus that it would be deprecated and removed, leaving developers to implement their own solutions, or will new markup be introduced?
Jason: At the moment it's deprecated with the intention to remove it, but they are open to new proposals in that area. The issue really was that there weren't many good use cases that weren't just a simple select-two-points operation. So they would be interested in a better analysis of the problem; it would need to be well grounded and proposed in a way that would satisfy implementation concerns.
Richard: One thing I should mention about the original Scratch is that the drag and drop in there was not a select-select scenario: you had to grab a block and smoothly drag it, then let go. So there are people doing drag and drop that is not select-select at all. That will never be accessible; it's not even accessible to people with mobility issues, because they don't have that motor control. So how do you get the word out not to do it in a way that's inaccessible no matter what?
Jesse: That annoyed me about Scratch; I had to keep reorganizing my blocks.
Jason: People who are keyboard users, or use something analogous, can use arrow keys to get there, and if they can see it visually they can position it, holding keys down if it doesn't move too fast. It's not efficient, but they can get it where they want it. I don't know how someone would manage with a single-switch device, and when you need to consider non-visual output it complicates the problem further. We could do some kind of highlight that moves two-dimensionally: it scans horizontally first, moves down a little bit, then scans horizontally again, and the user presses when it reaches the right spot.
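The scanning highlight Jason describes (sweep across a row, drop down, sweep again, press the switch to select) could be sketched like this; the grid model and tick-based timing are illustrative assumptions:

```javascript
// Yield grid cells in the order the scanning highlight visits them:
// left to right across each row, then down to the next row.
function* scanOrder(rows, cols) {
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      yield { row: r, col: c }; // highlight this cell next
    }
  }
}

// A switch press at tick t selects the t-th highlighted cell.
function cellAtPress(rows, cols, tick) {
  let i = 0;
  for (const cell of scanOrder(rows, cols)) {
    if (i++ === tick) return cell;
  }
  return null; // the press came after the sweep finished
}
```

A real implementation would loop the sweep and tune the dwell time per cell; this only shows the selection geometry.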
Charles: That would be interesting, and something we need to consider for DIAGRAM, because our scope is not just visual disabilities but motor and even cognitive ones as well. We have to think of all of these other types of disabilities when we come up with these designs, and have solutions for keyboard users, switch users, and any others we should consider.
Jason: There is the speech user too.
Richard: There is a project out of U of Washington called VoiceDraw; I'll have to send that around. VoiceDraw was a way to move the mouse around using vowel sounds: if you have a switch it can serve as the select, and you move the pointer with your voice. It's not out publicly that I'm aware; it was a research project, but it's possible. I'll send that out. I don't have access to the wiki page, so maybe someone can share that with everyone.
Charles: do you have a GitHub account?
Richard: yeah, through U of Washington.
Charles: Send me your information and I'll add you, and I'll add you to the wiki too.
Richard: If you do a Google search on VoiceDraw you might get a YouTube video of it.
Jason: The only other thing I can think of would be dividing the space into four quadrants. That would produce another set of axes within each quadrant, and you could subdivide again and eventually get to the area you are looking for.
Charles: Depending on how many different switch interactions you had: if you had four you could subdivide by four, and if you had two you could subdivide by two.
Richard: For two you could alternate vertical and horizontal cuts and keep honing in. I think it's been explored in research, but I'd have to find it. There is a whole bunch of other work in the literature on things like making a single target get really big when the mouse gets in its vicinity, so you don't have to be so fine-grained. Things like that have been explored.
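The subdivision idea from Jason, Charles, and Richard could be sketched as repeatedly halving a region, alternating vertical and horizontal cuts for the two-switch case. Coordinates and data shapes here are invented for illustration:

```javascript
// One switch press keeps one half of the current region.
function subdivide(region, choice, axis) {
  const { x, y, w, h } = region;
  if (axis === 'vertical') {
    // cut left/right: choice 0 keeps the left half, 1 the right
    return { x: x + choice * (w / 2), y, w: w / 2, h };
  }
  // cut top/bottom: choice 0 keeps the top half, 1 the bottom
  return { x, y: y + choice * (h / 2), w, h: h / 2 };
}

// Alternate axes, as in Richard's vertical/horizontal honing-in.
function homeIn(region, choices) {
  return choices.reduce(
    (r, c, i) => subdivide(r, c, i % 2 === 0 ? 'vertical' : 'horizontal'),
    region
  );
}
```

Each press halves the remaining area, so reaching a target of side s in a region of side S takes on the order of 2·log2(S/s) presses.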
Charles: I remember a movie that was like that: when you got close to something it got bigger, and if you missed it got even bigger, until you got it. Some type of strange animation.
Richard: None of these things work on the web, they are all native apps.
Charles: So Jason, one thing that I want to try to do is get some code examples into our wiki repository, and eventually into the DIAGRAM accessible repository. What do you think? Is that something we could do, to see how we could code up something like this? Maybe take some of the simple examples, or the idea of dividing quadrants.
Jason: I think if there are people working on some examples already, or planning to, that would be useful to know. There are some around being used for various purposes; I think Jesse sent out the details of the ones implementing the sorting operation. Those were interesting; they had a very effective keyboard interface. I think locating existing examples is a good start, and if we find holes we can get them written.
Charles: So my ask of you all: if there is anything open source that you can share as examples, I'd love to have it to put in here, and eventually in our broader repo, but for now the drag-and-drop code repository on GitHub. Does anyone have anything they could send in the next week or so?
Jesse: I think the examples of sorting the list are open source and we should be able to find the code; I can do a quick search and send it along.
Jason: I wonder if the ARIA Authoring Practices people might have something. Gunderson might have something that we could consider.
Charles: Do you have his email or a relationship to ask him for that?
Jason: I know him a little and have his details; we could see if he has anything of note. I'm trying to think if there is anything around here, but I can't think of anything good. The work here is still to be done; we might have something interesting to share later on, but it hasn't been written yet.
Rob: What I have I need to get permission on. I could at least look into it.
Richard: I don't have anything right now, but I might in a couple of months that we can demo and share; right now it's just informative. And it's not web based; we gave up on that, it's too difficult, so we're doing Android right now. Our target is universal design, the same thing working for everyone, and it will be for young kids.
Charles: If it becomes open source and you can share Android based is fine. We’re trying to come up with different languages so having something for Android is exactly what we are looking for.
Richard: There is a group at Google working on accessible technologies in this space, and they are also working on education tech generally for Computer Science for All, so I think that is open source already. I'll give you a link so you can at least see it.
Charles: This was great everyone, thanks and happy holidays!