

Signs

Case summary:

Why is this work relevant for Innovation?
SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It is an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice, one gesture at a time. This smart tool is the interface for all non-verbal communication with voice assistants, today and in the future.

Background
SIGNS is based on an intelligent machine learning framework (Google TensorFlow) that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the selected voice assistant service understands. For now, SIGNS brings hearing-impaired people and voice assistants together by offering some of the most-used commands on Amazon Alexa: for example, it can put milk on the shopping list, change the color of smart home lights, and show the weather. All in all, it understands a limited set of signs in German Sign Language. However, it can learn a new set of signs in only minutes and can be adapted to any sign language.

Describe the idea
There are over 2 billion voice-enabled devices across the globe. Voice assistants are changing the way we shop, search, communicate and even live. At least for most people. But what about those without a voice? What about those who cannot hear? According to the World Health Organization, around 466 million people worldwide have disabling hearing loss. Project SIGNS was developed to create awareness for inclusion in the digital age and to facilitate access to new technologies. Many people with hearing loss use their hands to speak. This is their natural language. Their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands. No sound means no reaction. The SIGNS prototype bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana).

What were the key dates in the development process?
06/18 – 07/18 Ideation / conceptualization
07/18 – 08/18 Development of an experience prototype based on an infrared camera and hand gesture recognition
08/18 – 09/18 Experience testing
10/18 – 12/18 Conceptualization of the next-gen prototype with full-body gesture recognition based on an RGB camera
01/19 Interviews / user acceptance testing with deaf participants
01/19 – 04/19 Design and development of the next-gen prototype
04/19 Premiere of SIGNS at the Conversational Design Event in Frankfurt
04/19 – 05/19 Experience testing with a focus group of deaf participants

Describe the innovation/technology
SIGNS uses an integrated camera to recognize sign language in real time and communicates directly with a voice assistant. The system is based on the machine learning framework Google TensorFlow. The output of a pre-trained MobileNet is used to train several KNN classifiers on gestures. The recognizer estimates the likelihood of each gesture recorded by the webcam and converts it into text. The resulting sentences are translated into conventional grammar and sent to a cloud-based service that generates speech from them. In other words, the gestures are converted into a data format (text to speech) that the selected voice assistant understands. In this case, the Amazon Voice Service (AVS) is shown. AVS responds with metadata and audio data, which in turn is converted back into text by a cloud service, and the result is displayed. SIGNS works on any browser-based operating system that has an integrated camera and can be connected to a voice assistant.

Describe the expectations/outcome
The goal is to make SIGNS available on all assistants and to all hearing-impaired people. In the first release, planned for Q4, SIGNS will launch on Windows and macOS with Amazon Alexa connectivity and a limited set of gestures. A connector to Google Assistant is planned for Q2/20, and Microsoft Cortana for Q3/20. By Q4/20 there will be a crowd-based dictionary through which the community can contribute vocabulary. According to Gartner, 30% of all digital interactions will be non-screen-based by 2020. Just like voice, gestures are an intuitive way of communicating, which makes them extremely relevant for the industry. Not just for the hearing-impaired, but for everyone. People find it awkward to speak to the invisible in public, which is why we believe that invisible conversational interactions with the digital world are not limited to voice itself.
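The technology description above matches the standard TensorFlow.js transfer-learning pattern: embeddings from a pre-trained MobileNet feed a KNN classifier running in the browser. The sketch below illustrates that pipeline under stated assumptions; the gesture labels, the sign-to-command mapping, the #webcam video element and the use of the browser's Web Speech API as a stand-in for the cloud text-to-speech / Amazon Voice Service handoff are all illustrative, not the project's actual code.

```ts
// Minimal sketch of a SIGNS-style recognition loop in the browser (assumptions noted above).
// Dependencies: @tensorflow/tfjs, @tensorflow-models/mobilenet, @tensorflow-models/knn-classifier
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

// Hypothetical mapping from recognized signs to spoken assistant commands.
const COMMANDS: Record<string, string> = {
  milk: 'Alexa, add milk to the shopping list',
  lights_blue: 'Alexa, turn the lights blue',
  weather: 'Alexa, what is the weather today?',
};

// Stand-in for the cloud text-to-speech + Amazon Voice Service handoff described
// above: here the recognized command is simply spoken aloud by the browser.
function sendToAssistant(text: string): void {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

async function main() {
  // Assumes a <video id="webcam"> element on the page.
  const video = document.querySelector<HTMLVideoElement>('#webcam')!;
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const net = await mobilenet.load();        // pre-trained MobileNet
  const classifier = knnClassifier.create(); // KNN classifier over its embeddings

  // Training: store the embedding of the current frame under a sign label.
  // This is how a new set of signs can be taught in minutes.
  const addExample = (label: string) => {
    const embedding = net.infer(video, true); // true → return the embedding
    classifier.addExample(embedding, label);
  };

  // Recognition: classify the current frame and hand the resulting text on.
  const recognizeOnce = async () => {
    if (classifier.getNumClasses() === 0) return;
    const embedding = net.infer(video, true);
    const result = await classifier.predictClass(embedding);
    embedding.dispose();

    const command = COMMANDS[result.label];
    if (command && result.confidences[result.label] > 0.9) {
      sendToAssistant(command);
    }
  };

  setInterval(recognizeOnce, 500);           // poll the webcam twice a second
  (window as any).addExample = addExample;   // expose for manual training in the console
}

main();
```

In the prototype as described, the recognized signs would instead be assembled into grammatical sentences, synthesized to audio by a cloud service and passed to AVS, whose audio reply is converted back to text for display; the local speech call above only stands in for that last step.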




