Doodle Tales is an innovative storytelling app that blends the creative fun of Mad Libs with the excitement of "choose your own adventure" books and kid-friendly games like Scribblenauts.
Kids respond to drawing prompts and select adjectives, verbs, and scenes. Every choice they make alters the course of the story, ensuring a unique and engaging adventure every time!
My initial sketches focused heavily on AI recognition tools that could enhance pre-existing learning styles. I wanted to create something that went beyond the screen. As a hands-on learner myself, I kept thinking,
"What's something that I wish I had growing up?"
As a result, I found that I wanted to explore learning from real-world objects in conjunction with our own environment. These ideas relied heavily on the user having a digital device to scan objects and then interact with the app through prompts or text.
Out of my group's collective sketches, we narrowed down to one idea from each member. Britney's "Soccer Aiming Game" and Alokik's "Number Shooting Game" were the runners-up, but we finally settled on exploring the two remaining ideas:
Test Maker - mine
Diary Book - Ariel
We gave each other a few days and reconvened on a Friday night to share our refinements of these two ideas. My groupmates' sketches are below.
Test Maker - Britney
Kid's Diary Book - Ariel
Kid's Diary Book - Alokik
These are great sketches, don't get me wrong. But the best thing about Human-Computer Interaction is that code isn't the only tool at your disposal. I felt that we had started to box ourselves in, so I pushed for us to explore beyond details like button placement and text font.
Ariel's "Diary Book" idea had one point that really resonated with me: the idea of having a kid say something. I wanted to build on the familiar mediums of drawing and writing, and use new interpretation tools to help kids further develop their message.
After showing off Google's Quick Draw and Stable Doodle, I proposed a last-minute change in concept. I'm lucky that my team went along with it. From there, our idea evolved from making songs, to games, and finally, to stories.
The site was created using HTML, JavaScript, and CSS on Glitch. We also implemented these additional features:
Text to Speech
Album View
These AI tools were used in the final implementation:
Stable Diffusion XL - Image Generation
OpenAI - Story Generation
ResponsiveVoice - Text to Speech
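To give a sense of how the Text to Speech feature fits together, here's a minimal sketch of reading a finished story aloud with ResponsiveVoice. The function name `readStory` and the voice choice are my assumptions, not the app's actual code; `responsiveVoice.speak` is the library's real entry point, available once its script tag has loaded in the browser.

```javascript
// Hypothetical sketch: read a generated story aloud with ResponsiveVoice.
// readStory and the "UK English Female" voice are assumptions for illustration.
function readStory(storyText) {
  // The responsiveVoice global only exists in a browser page
  // that has loaded the ResponsiveVoice <script> tag.
  if (typeof responsiveVoice !== "undefined") {
    responsiveVoice.speak(storyText, "UK English Female");
    return true; // speech was requested
  }
  return false; // not in a browser, or library not loaded
}
```

Wrapping the call in a guard like this keeps the rest of the page working even if the library fails to load.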
We found a lot of exciting AI tools, and we tested a number of them before settling on Stable Diffusion XL and OpenAI. We focused heavily on idea exploration by giving kids enhanced versions of their drawings and prompts, and I feel that SDXL especially brings this to fruition. The other AI tools I found have a world of potential, and I'm excited to see where they take us.
"tiger swimming in the dark jungle"
"dog watching the sunset"
I was responsible for the overall design and direction of the project, along with its AI choices. I brought SDXL, Google Quick Draw, and many other tools to the table, all inspired by the games that I loved growing up. I asked my group members for screen recordings of the app in use and supplemented them with my own to gather enough footage to edit into the video. Outside of the administrative tasks, I mainly did visual refinements for the site: namely, the JS and CSS for the title page, and a few animations between the other pages. On top of this, I did some fine-tuning of the layouts and buttons on the following pages.