xDiversity makes technology accessible to all of us through workshops on AI and sensing technology.
By crowdfunding and organizing workshops, xDiversity (Cross Diversity) has made technology accessible to us in a way that we can feel firsthand.
One of the problems facing Japan is the declining birthrate and aging population.
According to a 2019 survey by the Japanese government, 28.4% of Japan's population was over 65 years old in 2019.
In the same survey, the population over 65 is projected to be 35.3% in 2040 and 38.1% in 2060.
Japan's total population peaked in 2010 and is now declining, and the share of the population aged 64 and under is expected to keep falling.
As a result, the young population that supports the welfare system is expected to shrink continuously.
xDiversity, a research project in Japan, is conducting research and development with the aim of using technology to solve problems caused by diversity that modern Japanese society is unable to cope with.
They treat the decline of physical abilities that comes with aging, as well as physical disabilities, as forms of individuality, and try to optimize for needs that social infrastructure cannot.
I work in welfare for people with mental disabilities, so I support xDiversity's efforts.
Japan has one of the fastest aging populations of any developed country in the world.
Welfare for the elderly, disabled children, and people with disabilities in Japan is a field with many challenges that could be solved by AI, data, robotics, and other technologies.
I believe that their actions are noteworthy both for the welfare of the mentally disabled and for the future of many developed countries.
I supported xDiversity through crowdfunding and participated in their workshops.
By actually using various technologies in their workshops, we can become more familiar with technology and the problems it can solve.
In this article, I would like to report on their workshop and introduce their efforts.
xDiversity uses AI, data, and robotics to remove a variety of obstacles.
- Yoichi Ochiai (Hologram): University of Tsukuba Associate Professor / Pixie Dust Technologies CEO
- Yusuke Sugano (Computer Vision): The University of Tokyo Associate Professor
- Ken Endo (Robotics): SONY CSL / Xiborg
- Tatsuya Honda (UI Design): FUJITSU LTD. / Ontenna project leader
Their research area, which was selected for JST CREST, is "Design and Deployment of a xDiversity AI platform for Audio-Visual-Tactile Communication towards an inclusive society".
Japan has one of the smallest budgets for research and development among developed countries.
They use corporate sponsorship and crowdfunding to promote their projects.
Crowdfunding is a good way for ordinary people who are not affiliated with a company or research institution, but who support the activities of the organization, to participate in its activities.
We support such organizations, and through their outcome reports and workshops we can experience the social implementation of technology up close.
Some of xDiversity's various projects:
Self-driving wheelchair
Retinal projector
Ontenna, a device that represents sounds through lights and vibrations
Artificial limb technology for body enhancement
Spatial sound representation device
Report on the workshop organized by xDiversity.
I participated in two workshops that were of interest to me.
Workshop Report: Let's make data of your movements.
In this workshop, we experienced using sensors to measure various states of the human body and acquiring data for use in applications.
Experience creating a rule-based application.
Using the gyro sensor built into the M5StickC and a Scratch-like visual programming application, we created a simple rule-based application that measures the state of our bodies.
*Image: Comparison between the M5StickC and the camera's built-in level meter.*
In my experiments, my camera's level meter seemed to treat tilts within about ±1.5 degrees as horizontal.
If anything other than perfectly horizontal were indicated as not horizontal, the camera's posture would have to be controlled with extreme precision, which would put a lot of stress on the user.
We learned that for devices used directly by humans, the acceptable accuracy you choose affects usability.
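To make this concrete, the rule-based level check boils down to a single tolerance threshold. Here is a minimal Python sketch; the function name and the ±1.5-degree default are my own choices based on the camera experiment above, not M5StickC or workshop code:

```python
def is_horizontal(pitch_deg: float, tolerance_deg: float = 1.5) -> bool:
    """Rule-based check: treat any tilt within +/- tolerance_deg as horizontal.

    The 1.5-degree default mirrors the tolerance I observed in my camera's
    built-in level meter; a stricter value would force the user to hold
    the device with stressful precision.
    """
    return abs(pitch_deg) <= tolerance_deg
```

Widening `tolerance_deg` makes the device feel forgiving; narrowing it makes it feel strict. Choosing that single number is exactly the usability decision the workshop highlighted.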
Experience building machine learning-based applications.
Google Teachable Machine allows you to create simple applications using image recognition technology in your browser by input from your camera or by uploading images.
I created an application that uses my laptop's camera to detect when I am dozing off.
I am tormented by sudden strong sleepiness during the day due to my unstable blood sugar levels.
Since this is unavoidable, rather than trying to prevent dozing, I envisioned an application that identifies my average dozing frequency, defines that state as my baseline performance, and warns me only when a large outlier from that baseline occurs.
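As a rough sketch of that idea in Python (the function, the z-score rule, and the threshold are hypothetical illustrations I chose, not anything actually built in the workshop):

```python
import statistics

def is_performance_outlier(daily_doze_counts, today_count, z_threshold=2.0):
    """Warn only when today's doze count deviates strongly from my baseline.

    daily_doze_counts: past per-day doze counts, which define my
    'average performance state' rather than something to eliminate.
    """
    mean = statistics.mean(daily_doze_counts)
    stdev = statistics.stdev(daily_doze_counts)
    if stdev == 0:
        return today_count != mean
    z_score = (today_count - mean) / stdev
    return abs(z_score) > z_threshold
```

A day close to the historical average raises no warning; only a strong deviation from the baseline does.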
*Image: Design document for my doze sensing application.*
In order to detect the dozing state in this application, machine learning needs to classify the state in which I am concentrating and the state in which my concentration is impaired by the dozing state.
I used the camera on my laptop to learn my dozing state (I mimicked falling asleep) and the various gestures that might occur when I was focused but dozing.
*Image: Training my doze sensing application.*
Google Teachable Machine performed well: it was able to determine that my cheek swipes, neck stretches, and body leaning were not dozing.
*Image: Report of my doze sensing application.*
Feedback from the instructor.
One approach is to keep recording myself with a camera while I work, and train the model on clips of the dozing state cut out from that footage.
Since I can perceive the moment I wake up from a doze, I can record each wake-up time.
Then, across many recordings, only the frames immediately before each recorded time need to be treated as the dozing state.
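That labeling strategy can be sketched as follows; the function name and the 30-second window are my own assumptions for illustration:

```python
def label_dozing_frames(frame_times, wake_times, window_s=30.0):
    """Return indices of frames to label as 'dozing'.

    Frames recorded within window_s seconds *before* each recorded
    wake-up time are treated as the dozing state; all other frames
    stay unlabeled.
    """
    dozing_indices = set()
    for wake_time in wake_times:
        for i, t in enumerate(frame_times):
            if wake_time - window_s <= t < wake_time:
                dozing_indices.add(i)
    return sorted(dozing_indices)
```

The attraction of this scheme is that the labels come almost for free: the only manual input is the timestamp I note when I wake up.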
In addition, the instructor shared an episode from Deb Roy's research, in which he installed sensors all over the ceilings of his house to keep track of daily activities.
Other participants were trying out ideas for applications that would detect their poor posture at work, or that would detect if a child was falling or at risk of injury in a public space.
According to the instructor's feedback, conditioning a camera to recognize a child within a group is very difficult.
If it tries to discriminate by height, it cannot distinguish adults of short stature from children.
Humans, however, can tell children and short adults apart.
What makes humans recognize children as children, and how to make AI learn that, is a very difficult question.
Height-based discrimination would likely separate adults from children in most cases, but for xDiversity, how to approach minorities is an essential theme.
Since every sensor has its strengths and weaknesses, the instructor showed that a multimodal approach is important.
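As a minimal illustration of the multimodal idea, one can fuse per-sensor confidence scores with a weighted vote, so that a sensor that is strong in the current situation compensates for a weak one. The function and the numbers below are hypothetical, not from the workshop:

```python
def multimodal_decision(sensor_scores, weights=None, threshold=0.5):
    """Fuse confidence scores (0..1) from several sensors into one decision.

    Weights express how much each sensor is trusted in the current
    situation; unweighted, this is a simple average vote.
    """
    if weights is None:
        weights = [1.0] * len(sensor_scores)
    combined = sum(s * w for s, w in zip(sensor_scores, weights)) / sum(weights)
    return combined >= threshold
```

Even this toy version shows the point: a confident camera can outvote an ambiguous height estimate, and vice versa, which a single-sensor rule cannot do.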
We had the experience of building both rule-based and machine learning-based applications during the workshop.
We felt like translators between human activities and computer applications, imagining what information would be exchanged and how to approach it.
Workshop Report: Let's Build a Machine Learning Application.
In this workshop, we learned to design machine learning applications by applying the content of the previous workshop.
According to the lecturer, when creating machine learning applications it is difficult to divide labor between design and implementation, so it is useful for designers to learn machine learning themselves.
The lecturer encouraged us to think together about the differences between designing AI using machine learning and a regular design workshop, through the experience of actually creating AI.
Tatsuya Honda also provided us with a lecture on the history of design.
Historically, design and engineering have always been closely related.
Based on the philosophy of human-centered design, designing a machine learning application could be interpreted as not being a design act at all.
However, actually creating machine learning applications requires unraveling how humans perceive and process the world, and designing ways to replace those processes with machine learning.
Tatsuya Honda, the designer of Ontena, held a workshop in the past to create a machine learning application that would make Ontena vibrate with certain sounds.
After a basic lecture on machine learning, a deaf person and a hearing person were paired up to try to create a machine learning training model together.
It is difficult for deaf people to learn sounds on their own, and it is difficult for hearing people to understand the problems of the hearing impaired.
This workshop is expected to help pairs of deaf and hearing people recognize each other's perception of the world.
Yusuke Sugano provided us with a basic lecture on machine learning.
In the design of physical things, research and design are easily separated, and the interests of researchers and designers are easily aligned.
In the case of AI, however, researchers themselves are involved in the design process, which tends toward applied research: starting from existing theories and the idea that a given application would be useful.
How can we create a model that is conceived by the user, rather than an application of a model created by an expert?
It is both a technical issue and a community design issue.
Designing User Experiences with Machine Learning Applications.
*Image: Training my face classification application.*
In Japan, there is a concept of soy sauce face and sauce face.
A sauce face means a wild, sculpted look, while a soy sauce face means the contrasting, more typically Asian look.
I collected images of famous people on the Internet that were classified as soy sauce faces or sauce faces and trained Google Teachable Machine.
Not only that, we also tried training the model on images of the soy sauce and the sauce themselves.
We were interested in what would happen if we mixed non-human face training data into a learning model for classifying human faces.
We also associate soy sauce with a healthy, light impression, and sauce with a strong one.
This taste imagery is what gives rise to the concepts of soy sauce face and sauce face, but machine learning cannot understand this association.
The model is therefore not useful as a serious classifier; instead, it lets us show how much of a subject's face is soy sauce face and how much is sauce face, while very occasionally displaying the soy sauce or sauce itself.
We hoped that this would provide new inspiration for human communication.
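A small sketch of that selection rule in Python (the names and the 2% surprise rate are assumptions for illustration, not what we actually built in the workshop):

```python
import random

def pick_display(class_probs, surprise_labels, surprise_rate=0.02, rng=random):
    """Usually show the top face class; rarely, show the bottle itself.

    class_probs: mapping from class name (e.g. 'soy sauce face') to score.
    surprise_labels: non-face classes (the soy sauce or sauce itself),
    shown with a small probability to spark new communication.
    """
    if rng.random() < surprise_rate:
        return rng.choice(surprise_labels)
    return max(class_probs, key=class_probs.get)
```

The randomness is the design element here: the rare, "wrong" output is what breaks the predetermined experience and invites a reaction.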
*Image: Report of my face classification application.*
There is a gap between the concepts we humans have acquired and those acquired by machine learning.
We expected that bringing randomized elements into a predetermined user experience would shake it up and give birth to new concepts.
Although I could not refine the design in the short workshop, I am hopeful that including uncertainty in user experience design can create new and interesting communication.
What I felt through the workshops.
In the "Let's make data of your movements" workshop, we handled a small sensor-equipped computer and repeated trial and error with simple programming.
It was a fun experience to see the program that we wrote with our own hands work for us, and to struggle to describe in the program what we wanted to do in our minds.
I could now imagine what kind of rules are written into the devices and applications I use every day, and I see the technology around me at a higher resolution.
In the workshop, we were able to learn through experience what kind of problems we should set and what kind of conditioning we should consider in order to solve them.
If I were to talk with them about their sense of challenge and approach, I'm sure I would find something we could agree on.
The future expected through xDiversity activities.
For example, if people with physical disabilities can move around freely and enjoy sports, or if deaf people and hearing people can communicate with each other more smoothly, this not only helps people with disabilities but also opens up new relationships and communication opportunities for people without disabilities, leading to the expansion of social resources.
How should we think about the social implementation of technology?
I believe that the future of technology should directly approach bridging the gap between people with and without disabilities, because just as deaf people face barriers in communicating with hearing people, hearing people face barriers in communicating with deaf people.
We should try our hand at technology, data, and machine learning to expand our possibilities.
Technology can be said to have been implemented in society only when we can use it easily.
xDiversity makes technology more accessible and palpable to us not only through research and development, but also through workshops and community design.
Through them, we can empathize with people who are facing challenges and think of solutions to those challenges through technology.
What we can do to realize a society in which diverse individuals are each individually optimized is to mediate between technology and its social implementation.
In order to implement technology in society, we found that it is important to make the research and development of the technology participatory and to create opportunities for people to experience the technology through workshops.
xDiversity is a combination of research and development and community design.
I would like to continue to support them and learn from them.
Reference
White Paper on Ageing Society 2020 (Japanese only): https://www8.cao.go.jp/kourei/whitepaper/w-2020/html/zenbun/s1_1_1.html
xDiversity: https://xdiversity.org/en/
JST CREST: https://www.jst.go.jp/kisoken/crest/en/
Telewheelchair: https://digitalnature.slis.tsukuba.ac.jp/2017/03/telewheelchair/
Air Mount Retinal Projector: https://pixiedusttech.com/technologies/air-mount-retinal-projector/
Ontenna
Xiborg (Japanese only)
Holographic Whisper: https://pixiedusttech.com/technologies/holographic-whisper/
M5StickC: https://shop.m5stack.com/products/stick-c?variant=17203451265114
Google Teachable Machine: https://teachablemachine.withgoogle.com/

Thanks for reading.
July 25, 2021, written by Masa - Focus on the interaction.