Driven by the wave of new technologies such as 5G, AI, VR, and AR, digital human technology has become a core demand of the digital age. The digital human technology of Dream Wld Tech Co., Ltd is the industry's first artificial intelligence algorithm that uses an ordinary camera to achieve the same effect as a million-yuan capture device, giving it a disruptive lead.
Cybatar, the artificial intelligence digital human drive engine researched and developed by Dream Wld Tech Co., Ltd, features world-leading digital human technology, including realism-grade facial capture: using only an ordinary camera, with no optical markers attached to the face, it captures facial expressions accurately and can faithfully restore more than 300 micro-expressions. Motion capture likewise requires no wearable optical equipment; an ordinary camera is enough to track the movement of every delicate joint of the target character, so that any gesture can be accurately reproduced on the digital human's hands.
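To make the idea of markerless, single-camera facial capture concrete, here is a minimal sketch of one common approach: 2D facial landmarks detected in each video frame are mapped to expression (blendshape) weights that drive the digital human's face. The landmark names, reference distances, and thresholds below are hypothetical illustrations, not Cybatar's actual algorithm.

```python
# Minimal sketch: map detected 2D facial landmarks to a blendshape weight.
# Landmark names and thresholds are hypothetical; a real system learns a
# far richer mapping (300+ micro-expressions) from data.

def blendshape_weights(landmarks: dict) -> dict:
    """Convert one frame's facial landmarks into 0..1 expression weights."""
    # Normalize by inter-ocular distance so the result is scale-invariant
    # (works at any distance from the camera).
    left_eye, right_eye = landmarks["left_eye"], landmarks["right_eye"]
    iod = ((left_eye[0] - right_eye[0]) ** 2 +
           (left_eye[1] - right_eye[1]) ** 2) ** 0.5

    upper_lip, lower_lip = landmarks["upper_lip"], landmarks["lower_lip"]
    mouth_gap = abs(lower_lip[1] - upper_lip[1]) / iod

    # Clamp to [0, 1]: 0 = mouth closed, 1 = fully open (thresholds assumed).
    jaw_open = min(max((mouth_gap - 0.05) / 0.45, 0.0), 1.0)
    return {"jaw_open": jaw_open}

# Example frame: landmark positions in pixel coordinates (x, y).
frame = {
    "left_eye": (100.0, 120.0),
    "right_eye": (160.0, 120.0),
    "upper_lip": (130.0, 170.0),
    "lower_lip": (130.0, 185.0),
}
print(blendshape_weights(frame))
```

Per-frame weights like `jaw_open` would then be applied to the corresponding blendshapes on the digital human's face rig, frame by frame.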
Artificial intelligence digital humans are widely used across virtual live streaming, live-stream e-commerce, online education, short video, VR/AR, finance, online customer service, brand endorsement, web drama production, and other fields. They carry very high commercial and practical value and accelerate the development of digital culture.
How to Use Digital Technology to Tell a Story
The digital human technology researched and developed by Dream Wld Tech Co., Ltd is the industry's first artificial intelligence algorithm that uses ordinary cameras to match the effect of capture equipment worth a million yuan. Compared with traditional capture equipment, our products have the following advantages:
1. Super-realistic, high-precision, professional film-level capture effects.
2. No hardware investment cost: only a mid-to-high-end computer is needed, with no need to purchase hardware equipment priced at hundreds of thousands of yuan.
3. Extremely low cost of use: it relies only on an ordinary camera, and anyone can wear any clothing. With Cybatar, there is no need to stick markers, no need to customize a controller, and no need to calibrate before use.
4. More possibilities and innovative production methods: you can first shoot a video with a mobile phone and then drive the digital human from the video data. In Cybatar, you only need to choose a character and import the video, so the person driving the digital human does not need to go to a studio.
5. Greatly improved video/animation production efficiency: compared with the traditional production process, efficiency is increased by 50 times.
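The video-driven workflow in point 4 can be pictured as a simple pipeline: import a pre-recorded video, estimate a pose for each frame, and retarget that pose onto the chosen character. The function and data names below are hypothetical stand-ins that illustrate the pipeline shape only, not Cybatar's actual API.

```python
# Hypothetical sketch of a video-driven animation pipeline: per-frame pose
# estimation followed by retargeting onto a chosen character rig.

def estimate_pose(frame: str) -> dict:
    # Stand-in for an AI pose estimator; a real one would return rotations
    # for every tracked joint. Values here are fixed for illustration.
    return {"elbow": 42.0, "wrist": 10.0}

def retarget(pose: dict, character: str) -> dict:
    # Apply the captured joint rotations to the selected character's rig.
    return {f"{character}.{joint}": angle for joint, angle in pose.items()}

def drive_from_video(frames: list, character: str) -> list:
    """Produce one animation keyframe per video frame -- no studio needed."""
    return [retarget(estimate_pose(f), character) for f in frames]

clip = drive_from_video(["frame_000", "frame_001"], character="avatar_a")
print(len(clip), clip[0])
```

Because the input is just an ordinary video file, the performer can record anywhere on a phone and the capture happens later on a desktop computer.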
How to Enhance User Experience
In terms of the user experience, the artificial intelligence digital human drive engine Cybatar greatly reduces the cost of capture. Using Cybatar requires only an animator's computer and an ordinary camera, with no motion-capture suit or face-capture helmet, and no markers stuck on the face, which saves a great deal of time.
Cybatar truly achieves PGC quality in UGC scenes, so that anyone in any clothing can drive a virtual digital human. In addition, Cybatar is a powerful animation production software. We believe there are many highly innovative animation creators in the market who, because traditional animation software such as Maya is difficult and time-consuming to use, can only publish text. Through Cybatar, creators can produce good-looking animation at lower cost and higher efficiency, which lowers the threshold of animation production. Many videos no longer require animators to spend a great deal of time: a video clip can be produced simply by capturing a performance.
Potential Market Application: Virtual digital humans have many application scenarios in the market, and they can also help enterprises and users create great value. Specifically, the following three application scenarios serve as examples:
Virtual live broadcast
Through the software, users can live-stream in a video-driven manner, and the facial expressions and actions of real users are accurately presented on virtual digital humans. Once this image becomes an IP, its commercial value multiplies: the digital human can not only interact with fans in real time, but also participate in live broadcasts and monetize the traffic.
Animation production
Traditional animation production is not only time-consuming but also costly; even after multiple animators work on it for a long time, the final effect may still be unsatisfactory. With Cybatar, however, digital humans can be driven directly by video, and a good animation video can be produced from one's own performance together with the software's powerful production functions.
Virtual social networking
Compared with traditional social networking, we believe virtual social networking has very large market prospects: everyone can drive a virtual digital human of their own and experience different things in a virtual world. We can attend virtual classrooms, play games, hold meetings and work, and watch movies together. Just like the Oasis in "Ready Player One", everything will become beyond reality and incredible.
In addition, virtual digital humans can also be used in virtual customer service, virtual idols, virtual theaters, VR, AR, games, and other scenarios to continue creating user value.