After 11 weeks in CSC318, I have much to think about and reflect upon. What have I learned? What has changed? What do I know now that I didn't before? The truth is, these questions are quite abstract. I could certainly list everything I have learned, but I don't believe any reader of my blog would benefit from such a list. Instead, I think it is more important to explain how my mindset has changed. Before this class, I wasn't very creative or artistic. I would see people create beautiful works of art and wonder: how did they create such a thing?
However, this class made me realize something important. Art isn't just the technical ability to create visual works - it is any expression of human creativity. You can know nothing about the craft of visual art or graphic design, yet in mindset still be extremely artistic and creative. If you can imagine beautiful, innovative and unique creations that build upon your life's experiences while still being different from everything you have already seen, then for all intents and purposes, you are an artist. There are many products I have used and gadgets I have seen that I believe I could improve given my experience with this class, because I can now use this artistic mindset to rigorously evaluate them.
However, there is one skill I am still sorely lacking: how do I actually create art? This is a massive topic that people earn degrees in, so it may be unreasonable to expect it to fit into a 12-week class with so many other topics to cover. Still, it is worth remembering that this is 2017. There are many design tools out there geared towards providing the best user experience, and anyone with the desire to learn, the right information and good teachers can pick up quite a lot in a few short hours. I believe it would have been better if at least one or two lectures were dedicated to rapidly prototyping beautiful, artistic designs. I will probably learn this on my own time because I want to. But I have many friends in this class who aren't particularly enthusiastic about extra-curricular learning, and who mostly focused on the writing parts of the group project rather than the artistic parts. I believe they would have benefited if the class had included some specific lectures on visual design, such as how to use Photoshop or Illustrator.
CSC318 Blog
Friday 24 March 2017
Monday 20 March 2017
Combining Agile Software Development with User Centred Design
I found myself in an interesting situation this semester - two of my classes involved team-based development and collaboration. Alongside this class, I took CSC301, a software engineering class. What's more notable, however, is that in both classes I was developing products aimed at a specific user base, which required very systematic approaches to create something compelling and useful to the target group.
In this class, my group and I are working to build interactive technologies for the homeless. In CSC301, my group and I are building HCI tools for people who don't have hands and arms. Dealing with marginalized groups in both classes meant that I was in a unique position to take my understanding of each class and apply it to the other class.
After 10 weeks of CSC318, some of my opinions have changed. Before, I was fairly confident that I knew a lot: I knew what users wanted, I knew how to make good apps, I knew good design from bad design. But this class has challenged my knowledge and opinions many times. I thought I knew what was best for users, but that isn't true - on multiple occasions, user research, usability evaluation and literature review revealed flaws in my own understanding.
So, I wanted to take the tools I acquired from CSC318 and use them in CSC301. If I am making interactive technologies for a select group of people, it would make sense to believe that user centered design is an ideal approach to the development process.
There is one problem, however: in the fast-paced world of agile development (or, to be more specific, scrum, which is what is practiced in CSC301), how can we take the time to formally conduct user centered design? I thought about this for a long time, but came to a simple answer: you don't. In the real world, it is very difficult to do things in a way that is 100% consistent with the theory. The theory states very specifically how user centered design should be conducted across the development lifecycle, but in reality, few teams have the resources and time to follow that approach so perfectly.
Therefore, I realized that it may be more useful for me to take the most important things I learned from UCD and then somehow incorporate them into the agile process. Here is what I did:
- During the design stage (see the diagram of agile development above), instead of focusing solely on software specifications, UML, technological constraints and the like, also incorporate user and usability research. Allocate some resources to find out who your users are, what they need and what they currently lack. Before you spend resources on development, it is very important to know whether your product is actually useful and whether people actually want it.
- During the build and configure stage, always keep your user research in mind. This may sound simple, but very often, when software teams start development, they become preoccupied with technical issues, technical research and all the other intricacies of building complex software systems, and it is easy to forget what the research showed you in the beginning. The final product must reflect what your users actually want, not merely whatever you were realistically able to build after repeatedly changing the target product because of technical difficulties.
- During the testing stage, bring in actual usability testers instead of just unit testers or code reviewers. This is EXTREMELY important. If the people testing the product have been involved in its development, chances are they know too much about it to judge it impartially. The product must be tested with actual users before release, so that your team does not end up shipping something its users do not like.
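The steps above can be sketched in a few lines of Python - a toy model, not a real scrum tool, and every stage and task name here is an illustrative assumption, not something from our actual project backlog:

```python
# A minimal, hypothetical sketch of the idea above: attach a UCD
# checkpoint to each stage of a sprint so user research is a
# first-class task, not an afterthought. All names are illustrative.
SPRINT_STAGES = {
    "design":  ["write specs", "interview target users", "review user research"],
    "build":   ["implement features", "re-check features against research findings"],
    "testing": ["run unit tests", "run usability sessions with real users"],
}

def missing_ucd_tasks(stages):
    """Flag stages whose task list contains no user-facing activity."""
    ucd_keywords = ("user", "usability", "research")
    return [stage for stage, tasks in stages.items()
            if not any(k in task for task in tasks for k in ucd_keywords)]

print(missing_ucd_tasks(SPRINT_STAGES))  # → [] - every stage has a UCD task
```

The point of the check is simply that user-centered work should be visible in the plan for every stage, not bolted on at the end.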
These are some of the observations I have made over the past couple of weeks while involved in both HCI design and software design. I hope you, the reader, learned a thing or two.
Pic sources:
https://image.slidesharecdn.com/edsusergrouppresentation-160206105225/95/user-centred-design-and-students-library-search-behaviours-3-638.jpg?cb=1455237084
https://lh6.googleusercontent.com/o_QZmtpH3XrDabJmS6CUQfOySx7dMGlWT0hKH-HL3OfMWazMHDyMffRRR1GZd2aPMpXqjnwmrto1eCQQ_BSbRui11820xRA0qeu2YZ9CqNUeWHqdwToSk5fdRVQT4vXy9A
Sunday 5 March 2017
Evaluating the Nintendo Switch UI as a UX Designer
The Nintendo Switch console was released a couple of days ago. This provides a good opportunity to evaluate the UI of a modern, interesting device from the perspective of a UX designer. We won't focus on functionality; instead we will dedicate our efforts to the user experience.
When we start up the console, this is one of the first screens we are met with - this is a screenshot of the menu screen:
From the start, it is apparent that the UI makes good use of minimalism and Fitts's law - the user is not presented with a huge amount of information or a lot of buttons, and everything is large, visually appealing and sufficiently separated to prevent wrong presses. Everything is quite natural - things react how you would intuitively expect them to, and there is very little waiting; the UI is fast and responsive and doesn't test the user's patience.
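Fitts's law can be made concrete with a small sketch. This uses the standard Shannon formulation of the law; the constants `a` and `b` are device-dependent, and the values here are illustrative placeholders, not measurements taken from the Switch:

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Shannon formulation of Fitts's law: predicted time (seconds) to
    hit a target of a given width at a given distance.
    a and b are device-dependent constants; these defaults are
    illustrative placeholders, not measured values."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A large, nearby tile is predicted to be faster to hit
# than a small, distant one - which is exactly why big,
# well-separated menu buttons feel fast and forgiving.
print(fitts_movement_time(distance=100, width=80))   # big tile close by
print(fitts_movement_time(distance=500, width=10))   # small target far away
```

The takeaway is that doubling a button's size or halving the travel distance measurably lowers the predicted selection time, which is what the Switch's large tiles exploit.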
The UI also makes good use of color and graphic design. Colors are complementary and everything seems consistent - the buttons have a cartoonish feel that makes sense for the target user base, and the whole UI has a 'flat' design, somewhat like current iOS iterations, a trendy style seen a lot in 2017. The color scheme uses similar colors within the same feature (for example, within one button) and contrasting colors between different features (such as between different buttons).
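As a side note, the idea of complementary colors is easy to demonstrate: a complementary pair sits on opposite sides of the color wheel, i.e. 180 degrees apart in hue. A minimal sketch using only Python's standard library (not, of course, how the Switch itself computes anything):

```python
import colorsys

def complementary(rgb):
    """Return the complement of an RGB triple (components in 0.0-1.0)
    by rotating the hue 180 degrees around the color wheel."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

# Pure red complements to pure cyan.
print(complementary((1.0, 0.0, 0.0)))  # → (0.0, 1.0, 1.0)
```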
However, there are some shortcomings. For one, the keyboard isn't consistent with present-day touchscreen keyboards - the predictive text isn't very accurate, and the spellchecker is quite unsophisticated compared to current offerings. Also, some things aren't obvious: given the above menu, how do you start the browser? One of the trade-offs of a minimalist UI is that sometimes, obvious actions are no longer obvious.
All things considered, I am quite pleased with the UI. As someone who has used previous Nintendo consoles and was extremely frustrated with certain things - the lag, the inconsistency, the blocky text, etc. - I am glad that Nintendo focused on user-centred design and addressed many of the issues people had with previous consoles.
Sunday 19 February 2017
Tips to Perform User Research on Marginalized Groups
For my group project, my team was developing a product to help the homeless. While this seems like a respectable goal, it soon became clear that we faced one massive hurdle that threatened to undermine our entire development process: how could we effectively interview homeless people while minimizing risk to the researchers, finding participants ethically and without incentives, and still getting answers that were relevant, meaningful and actually helped us better understand our user base?
From my experience doing this project, I came up with a list of tips to keep in mind that can very effectively increase the quality of the research and reduce the effort needed to come up with a group of pilot testers from your target audience.
- Understand Who Your Userbase Is
This may seem like a simple point, but it is crucial to have some understanding of your user base before actually starting the user centered design process. When we first decided we wished to create interfaces to help the homeless, we neglected one very important point: this marginalized group was likely to be very difficult to conduct research on, because these people are unlikely to have the patience or interest to sit through long, complicated interviews. Very late in the process, we had to switch our research script from interviews to a questionnaire because we couldn't find participants willing to be interviewed.
- Know What Actions Can Alienate Them
Since you are dealing with marginalized groups, it is important to remember that they are on the fringes of society and that there can often be psychological issues related to what they have to deal with. When we talked to one homeless person, we realized that he became very annoyed when we asked about his finances, or when we asked him questions like 'what do the homeless do about .....?' It was clear he was not doing well financially, and it was frustrating for him to talk about it; being constantly reminded that he was homeless annoyed him as well. We realized we should phrase our questions as 'What do you do about .....?' instead of 'What do the homeless do about .....?'
- Know Which Organizations Already Help Them
If it is difficult to find user testers for your research, it may be helpful to contact organizations that understand this group and already have connections with them. They can provide valuable information about this user base as well as access to their network of marginalized individuals.
- Keep the Safety of Researchers in Mind
The unfortunate truth is that many marginalized groups are found in high-risk situations that can also endanger your researchers. One of our group members had previously done research with a marginalized group, and she reported that one of her former teammates contracted a disease after not taking appropriate precautions. It is important to help these groups, but it is absolutely crucial not to compromise the safety of your researchers. Take steps to minimize any risk that can come from conducting the research.
- Collaborate With Other Groups Performing Similar Research
While this isn't always possible, our internet searching revealed many other groups doing somewhat similar research. We didn't directly collaborate with them, but we learned a lot about how the user centred design process applies to the homeless from reading their posts. If you find it very difficult to conduct research, consider teaming up with groups doing similar work and pooling your resources - having to share data is less of a deal-breaker than having no data at all.
Sunday 5 February 2017
Microsoft's Vision of the Future
In 2009 (a whopping eight years ago), Microsoft released a video showing their vision of what the future of computer UIs might look like (possibly in 2020). I only recently came across this video, but to me, even now, nearly a decade later, some of the technologies seem completely far-fetched - things still in the realm of sci-fi and nowhere near ready for consumers.
The first thing this reminded me of is a topic we discussed in class a couple of weeks ago - it takes decades for a tech idea to travel from the furthest reaches of the minds of researchers and visionaries, through the production chain, to the pockets or desks of consumers (although with the current Internet-of-Things trend, tech goes further than just your pocket or your desk!). It takes quite a few more years for new technology to mature and become highly usable and functional. The first touchscreen phone came out in the '90s, and even now, after all these years, companies are still releasing new touch interactions that further enhance productivity and usability. While the layman may believe that technology changes quickly, the reality is that it takes years and years before a single compelling idea can become widespread in the consumer arena.
Now, going back to the video, I wondered: some of these things certainly look impressive, but how far away are we, really, from having access to them? It seems I am not the only person to have asked this question, because I found my answer in the comments.
Roman Yoshioka provides the following breakdown of present-day approximations to some of the conceptual technologies seen in the video:
Just to give you all some hope, these are the things that are already possible:
0:08 (Do you have a cat? - Real time translation): Skype translator
0:38 (Shannon's stuff - Remote classroom): Google Classroom
0:48 (Work Schedule - Digital calendar): Outlook, Google Calendar and every other calendar app
1:08 (Flight tracking): I can only recall Cortana and Google Assistant doing this.
1:35 (Air gesture): Leap Motion
2:09 (Holographic info): Microsoft HoloLens, though with a huge device on, and Google Glass
1:44 (Foldable phones): [RUMOR] the Surface Phone
2:55 (Screen hubs): Surface Hub
3:28 (Voice assistants): Cortana, Google Assistant, Amazon Alexa, Hound by SoundHound, etc.
3:51 (Fingerprint recognition): Has been around for quite a while
4:29 (Smarthomes): Maybe a combination between Alexa and Surface Hub? Possible but not done
5:12 (Image recognition): Microsoft CaptionBot; Google Photos can do it too
5:29 (Green roofs): Possible but barely implemented
In other words, it seems truly next-gen computer UI is in the same experimental, still-developing stage that touchscreen phones were in during the '90s. When I first started taking this class, a single, worrying idea lurked in the back of my head: do we really need any new innovations in UI? The keyboard and mouse are decades old, yet they are still the preferred way to interact with PCs. Touchscreens are ideal for smartphones and tablets, and there don't seem to be any better alternatives on the horizon. What else do we really need? But then again, what if the first GUI designer had asked why we needed GUIs when the command line worked so well? Or 'why do we need more RAM?', as one tech visionary supposedly (and foolishly) asked. It is human nature to be complacent. This video reminds me that there are still innumerable things that could become more functional with better UIs. It is our job as scholars of UI to find those things, document them, experiment with them, refine them, create them. This 8-year-old video has given me the inspiration to believe there is still so much work to be done before humanity can truly stop and say: yep, now our work is really done.
Wednesday 25 January 2017
Bad DPI Scaling: Bane of Current-Day UX?
We live in a world where every company is forever in pursuit of higher pixel density. There was a time when quad-HD (a resolution standard) was considered off-limits to all screens except very high-end monitors used by content creation professionals. Now, any flagship phone released in the past 2-3 years sports that resolution, with rumors of even higher resolutions in the pipeline.
In principle, it could be argued that this betters the user experience. It is true that not being able to resolve individual pixels makes using a screen more pleasing. However, there is evidence that at typical viewing distances, the human eye cannot appreciate improvements much beyond roughly 300 ppi. This doesn't seem to have swayed companies from releasing ever denser screens, however.
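For reference, pixel density is simple to compute: the screen's diagonal resolution in pixels divided by its diagonal size in inches. A quick sketch - the example numbers describe a generic quad-HD 5.5-inch phone, not any specific device:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density of a screen: diagonal pixel count divided by
    the diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# A quad-HD (2560x1440) panel at 5.5 inches is already well past
# the ~300 ppi threshold discussed above.
print(round(pixels_per_inch(2560, 1440, 5.5)))  # → 534
```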
Take, for instance, this flagship monitor announced by Dell at CES 2017:
Now here is what the regular Windows 10 UI looks like:
Are there any differences that are immediately noticeable?
For one thing, the bottom screen seems consistent with Fitts's law: icons are large, menu elements are well separated, and the text is easily resolvable by the human eye. The first picture, on the other hand, sports a UI that is tiny. Keep in mind that the monitor is 32 inches diagonally, so a person may be sitting 1.5-2.5 feet away. Even at those distances, effectively manipulating the UI becomes a massive hassle and disrupts regular workflow.
The thing is, it doesn't have to be this way. There are systems that incorporate proper UI scaling, so that low-end, low-resolution monitors and high-end, high-resolution monitors both sport UIs that are optimally and consistently sized. A UI can be scaled granularly by an algorithm instead of having a programmer "hard-wire" a handful of pre-set UI sizes for a handful of pre-set resolutions. I think this is one of those cases where we can appreciate a real-world difference between 'coders' and HCI professionals: a coder may not appreciate the importance of UX the way someone familiar with the field can, and may not realize that a UI can be scaled continuously regardless of resolution, which is why some stick to "hard-wiring" a few pre-set sizes.
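The continuous-scaling idea can be illustrated in a few lines. This is a simplified sketch, not any operating system's actual scaling pipeline; the 96 ppi reference is the traditional Windows baseline density:

```python
def scale_factor(actual_ppi, reference_ppi=96):
    """Continuous scaling: derive one multiplier from the screen's real
    pixel density instead of picking from a few hard-wired presets.
    96 ppi is the traditional Windows reference density."""
    return actual_ppi / reference_ppi

def scaled_size(base_px, actual_ppi):
    """Scale a UI element defined at the reference density so it keeps
    roughly the same physical size on any screen."""
    return round(base_px * scale_factor(actual_ppi))

# A 24 px icon stays physically the same size whether the monitor is
# a ~96 ppi office panel or a very dense ~280 ppi high-res display.
print(scaled_size(24, 96))    # → 24
print(scaled_size(24, 280))   # → 70
```

Because the multiplier is computed rather than chosen from a preset list, any resolution - including ones that didn't exist when the software shipped - gets a sensibly sized UI.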
I hope this disconnect between sub-optimal UI and ever-increasing resolution is addressed soon because otherwise, it is the end user who has to suffer.
Friday 13 January 2017
First Post! Hello World!
Welcome to my first post on my CSC318 Blog!
I am a computer science student in my final year of study. Over the past couple of years, I have learned various things that have deepened my knowledge of the mysterious black boxes that power so many facets of our lives in the 21st century. As I learned more and more, however, one thing became clear - much of what is learned in computer science revolves around the abstract. For example, as someone who has taken an advanced operating systems course, I can describe esoteric algorithms for designing a kernel scheduler, prove the correctness of a concurrent algorithm for synchronizing threads, or explain how many fault-tolerance problems reduce to an instance of the Byzantine Generals problem, and so forth.
But if I were asked to describe how the GUI in Linux works, I would be hard pressed to go into any more detail than some superficial aspects of the X windowing system. This made me realise that I want to know more about GUIs - both the computations that make them possible and the more abstract, philosophical aspects of what constitutes good design. I want to be able to look at a user-oriented application and confidently identify whether or not it employs good design principles. I want to know more about the history and current state of the art of UX.
I am very confident that CSC318 will be a useful course as I enter the latter part of my degree. I hope this blog can bring some enjoyment to you, reader!
T.K.