Signs
Case summary: Why is this work relevant for Brand Experience & Activation?
SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It's an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that's all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

Background
There are over 2 billion voice-enabled devices across the globe. Voice assistants are changing the way we shop, search, communicate or even live. At least for most people. But what about those without a voice? What about those who cannot hear? According to the World Health Organization, around 466 million people worldwide have disabling hearing loss. Project SIGNS was developed to create awareness for inclusion in the digital age as well as to facilitate access to new technologies.

Describe the creative idea (20% of vote)
Voice assistants are changing the way we shop, search, communicate or even live. At least for most people. But what about those without a voice? What about those who cannot hear? Around 466 million people worldwide have disabling hearing loss. With the SIGNS project, we are creating awareness for digital accessibility and inclusion. SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It's an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that's all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

Describe the strategy (20% of vote)
Many people with hearing loss use their hands to speak. This is their natural language. Their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands. No sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana).

Describe the execution (30% of vote)
SIGNS uses an integrated camera to recognize sign language in real time and communicates directly with a voice assistant. The system is based on the machine learning framework Google TensorFlow. The output of a pre-trained MobileNet is used to train several KNN classifiers on gestures. The recognition step calculates the likelihood of the gestures recorded by the webcam and converts them into text. The resulting sentences are translated into conventional grammar and sent to a cloud-based service that generates speech from them. In other words, the gestures are converted into a data format (text to speech) that the selected voice assistant understands. In this case, the Amazon Voice Service (AVS) is shown. AVS responds with meta and audio data, which in turn is converted back into text (speech to text) by a cloud service. The result is displayed. SIGNS works on any browser-based operating system that has an integrated camera and can be connected to a voice assistant.
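The recognition pipeline described above (MobileNet embeddings fed into KNN classifiers in the browser) matches the standard TensorFlow.js transfer-learning pattern. The sketch below illustrates that pattern under the assumption that the published `@tensorflow-models/mobilenet` and `@tensorflow-models/knn-classifier` packages are used; the function names (`addGestureExample`, `recognizeGesture`, `sendToAssistant`), the gesture labels, and the confidence threshold are illustrative placeholders, not the project's actual code.

```typescript
// Minimal sketch, assuming the TensorFlow.js browser stack described in the case.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();
let net: mobilenet.MobileNet;

// Capture one webcam frame as a tensor.
function grabFrame(video: HTMLVideoElement): tf.Tensor3D {
  return tf.browser.fromPixels(video);
}

// Training step: store the MobileNet embedding of the current frame
// under a gesture label (e.g. the sign for "weather").
function addGestureExample(video: HTMLVideoElement, label: string): void {
  const frame = grabFrame(video);
  const embedding = net.infer(frame, true); // true => return the embedding
  classifier.addExample(embedding, label);
  frame.dispose();
}

// Recognition step: compare the current frame against the stored examples
// and return the most likely gesture label with its confidence.
async function recognizeGesture(video: HTMLVideoElement) {
  const frame = grabFrame(video);
  const embedding = net.infer(frame, true);
  const result = await classifier.predictClass(embedding);
  frame.dispose();
  return { label: result.label, confidence: result.confidences[result.label] };
}

// Placeholder (hypothetical) for the hand-off of the assembled sentence to the
// selected voice assistant service, e.g. via a cloud text-to-speech step.
async function sendToAssistant(text: string): Promise<void> {
  console.log(`Would send "${text}" to the voice assistant`);
}

async function main(video: HTMLVideoElement): Promise<void> {
  net = await mobilenet.load();
  // ... gesture examples would be added here via addGestureExample(...) ...
  const { label, confidence } = await recognizeGesture(video);
  if (confidence > 0.9) {
    await sendToAssistant(label);
  }
}
```

In practice, recognized gestures would be buffered and translated into a grammatical sentence before being handed to the assistant, as the execution section describes; the sketch only shows the per-frame classification step.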
List the results (30% of vote)
Project SIGNS created awareness for inclusion in the digital age and facilitated access to new technologies. The response of the deaf community was overwhelming. Just like voice, gestures are an intuitive way of communicating, making them extremely relevant for the industry. Not just for the hearing impaired, but for everyone. People find it awkward to speak to the invisible in public, which is why we believe that invisible conversational interactions with the digital world are not limited to voice itself. Furthermore, we started a cooperation with the German Youth Association of People with Hearing Loss as a partner and extended the usability. Never before has a sign language assistant been launched at this quality, with the prospect of becoming a worldwide platform that is easily accessible from anywhere and can learn new signs and sign languages.
Basic Information
- Campaign: #German Youth Association Of People With Hearing Loss-影视-69c6#
- Brand: German Youth Association Of People With Hearing Loss
- Release date: 2000
- Industry: Public Welfare & Charity, Public Services
- Media category: Short film
- Language: English
- Media platform: Online
- Awards: