Overview



Vignette is face tracking software for virtual characters, built with osu!framework. Unlike most solutions, Vignette is:

  • Made with osu!framework, the game framework that powers osu!lazer, the next iteration of osu!.
  • Open source, from the very core.
  • Always evolving - Vignette improves every update, and it tries to know you better too, literally.

Running

We provide releases from GitHub Releases and also from Visual Studio App Center. Vignette distributes builds to a select few people before we create a release here, so keep an eye out.

You can also run Vignette by cloning the repository and running this command in your terminal.

dotnet run --project Vignette.Desktop

Developing

Please make sure you meet the prerequisites: at minimum, the .NET SDK used by the dotnet command above and an editor with C# support.

Contributing

The style guide is defined in the .editorconfig file at the root of this repository and will be picked up by IntelliSense in capable editors. Please follow the provided style for consistency.

License

Vignette is Copyright © 2020 Ayane Satomi and the Vignette Authors, licensed under the GNU General Public License v3.0 with SDK exception. For the full license text, please see the LICENSE file in this repository. Live2D components are additionally covered by a separate license, the Live2D Open Software License, which can be found here.

Commercial Use and Support

While Vignette is GPL-3.0, we do not provide commercial support. Nothing stops you from using it commercially, but if you want dedicated support from the Vignette engineers, we highly recommend the Enterprise tier on our Open Collective.

Comments
• Refactor User Interface

First and foremost, this is the awaited UI refresh, which now sports a sidebar instead of a full-screen menu. It also brings updated styling to several components and updates osu!framework and Fluent System Icons. Backdrops (backgrounds) receive a significant update as well, now allowing both video and images as a target.

Under the hood, I have refactored theming and keybind management (UI to follow). Themes can now be edited on the fly, but only the export button works; applying changes live will follow. I've also laid down the foundation for avatar, recognition, and camera settings, but only as hollow controls that don't do anything yet.

    priority:high area:user-interface 
    opened by LeNitrous 18
• Refactor Vignette.Camera

    This PR fixes issue #234.

The previous solution I implemented simply avoided adding a duplicate item to the FluentDropdown and warned about it with a console write statement (see the attached screenshots).

Now, the solution indexes the friendly names so that all options show up. We're now faced with a "can't open camera by index" bug.
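
A rough sketch of the indexing idea (illustrative only, not the actual PR code; the class and method names are hypothetical): append an index to repeated friendly names so every device remains a distinct dropdown entry.

using System.Collections.Generic;

public static class CameraNameIndexer
{
    // Turns ["Camera", "Camera", "Camera"] into
    // ["Camera", "Camera (2)", "Camera (3)"] so each device stays distinct.
    public static IEnumerable<string> Index(IEnumerable<string> friendlyNames)
    {
        var counts = new Dictionary<string, int>();
        foreach (var name in friendlyNames)
        {
            counts.TryGetValue(name, out int n);
            counts[name] = n + 1;
            yield return n == 0 ? name : $"{name} ({n + 1})";
        }
    }
}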

    opened by Speykious 9
• Allow osu!framework to not block compositing

Desktop effects are disabled globally while Vignette is running. Some parts, like disabling decorations, are fine, but transparency, wobbly windows, smooth animations for actions, and so on are all disabled as long as Vignette is running.

    proposal 
    opened by Martmists-GH 8
• [NeedHelp] It crashed

It crashed the first time I opened it. I'm using Windows 7 Service Pack 1, dotnet x64 5.0.11.30524.

In most cases it crashed one way (screenshot attached), and sometimes another way (screenshot attached).

As far as I know, no logs, crash reports, or dumps are created. :( Can you help?

    invalid:wont-fix 
    opened by huzpsb 7
• Vignette bundles the dotnet runtime

It seems the last issue went missing, so I'm re-adding it.

    Reasons to bundle:

    • No need for end user to install it

    Reasons not to bundle:

    • User likely already has dotnet installed
    • Installer or install script can install it if missing
    • Prevent duplication of dependencies
    • Allow package manager (or user) to update dotnet with important fixes without the need for a new Vignette release
    • Some systems may need a custom patch to dotnet, which a bundled runtime would overwrite
    invalid:wont-fix 
    opened by Martmists-GH 6
• Evaluate CNTK or Tensorflow for Tracking Backend

Unfortunately, our tracking backend, FaceRecognitionDotNet (which uses DLib and OpenCV), didn't turn out as performant as expected. The delta is too high to produce significant data, and the models currently perform poorly. In light of that, I will have to build a backend we can control directly instead of relying on third-party work whose quality we can't vouch for.

Right now we're looking at CNTK and Tensorflow. While CNTK is from Microsoft, there is more groundwork around Tensorflow, so we'll have to decide on this.

    proposal priority:high 
    opened by sr229 6
• Use FFmpeg instead of EmguCV

Currently, EmguCV is used only to handle webcam input. We've had various problems with runtimes not being in the right place and cameras not being detected.

Thus I propose that we use FFmpeg for that task. I think it will be much easier to deal with, as we can just use it as a system-installed binary. Not to mention the library is LGPL, which is just perfect for our use case. One possible shape of this is sketched below.
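
A minimal sketch of consuming a system-installed FFmpeg binary, assuming a Linux v4l2 device path (the class and method names are hypothetical; Windows would use -f dshow with a device friendly name instead):

using System.Diagnostics;

public static class FfmpegCamera
{
    // Launch the system-installed ffmpeg and read raw BGR24 frames
    // from its standard output, one width*height*3-byte frame at a time.
    public static Process Open(string device = "/dev/video0", int width = 640, int height = 480)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = $"-f v4l2 -i {device} -f rawvideo -pix_fmt bgr24 -s {width}x{height} pipe:1",
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        return Process.Start(psi);
    }
}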

    priority:medium area:recognition 
    opened by Speykious 5
• Lag Compensation for Prediction Data to Live2D

As part of #28, we have discussed how raw data would be jittery and rough, even if the neural network used were theoretically as precise as a human eye at predicting the facial movements of the subject. To compensate for jittery input, we will implement a sort of lag-compensation algorithm.

    Background

John Carmack's work on latency mitigation for virtual reality devices (source) explains that the latency from the user's physical head movement to the image reaching their eyes is critical to the experience. While the document is aimed mainly at virtual reality, one can argue that the methodologies used to provide a seamless virtual reality experience also apply to a face tracking application, as face tracking, like HMDs, is a very demanding "human-in-the-loop" interface.

Byeong-Doo Choi et al.'s work on frame interpolation enhances a target video's temporal resolution through a novel motion-prediction algorithm, adaptive OBMC. According to the paper, such frame interpolation techniques have been proven to give better results than the algorithms currently used for frame interpolation in the market.

    Strategy

As stated in the background, there are many strategies for performing lag compensation on the raw, jittery prediction data coming from the neural network; we limit ourselves to these two:

    Frame Interpolation by Motion Prediction

Byeong-Doo Choi et al. achieve frame interpolation as follows:

    First, we propose the bilateral motion estimation scheme to obtain the motion field of an interpolated frame without yielding the hole and overlapping problems. Then, we partition a frame into several object regions by clustering motion vectors. We apply the variable-size block MC (VS-BMC) algorithm to object boundaries in order to reconstruct edge information with a higher quality. Finally, we use the adaptive overlapped block MC (OBMC), which adjusts the coefficients of overlapped windows based on the reliabilities of neighboring motion vectors. The adaptive OBMC (AOBMC) can overcome the limitations of the conventional OBMC, such as over-smoothing and poor de-blocking

According to their experiments, this method produces better image quality for the interpolated frames, which is helpful for prediction in our neural network. However, it comes at the cost of having to process the video at runtime, as their experiments were done only on pre-rendered video frames.

    View Bypass/Time Warping

John Carmack's work on reducing input latency for VR HMDs suggests a multitude of methods. One of them is view bypass, achieved by taking a newer sampling of the input.

To achieve this, the input is sampled once but used by both the simulation and the rendering task, reducing latency for both. However, the input and game threads must run in parallel, and the programmer must be careful not to reference the game state, as doing so would cause a race condition. A minimal sketch of the idea follows.
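
This is only an illustrative sketch under stated assumptions (TrackingSnapshot and its fields are hypothetical): the tracking thread publishes the latest sample atomically, and the simulation and render threads both read the same immutable snapshot without ever touching each other's state.

using System.Threading;

// Hypothetical immutable sample produced by the tracking thread.
public sealed record TrackingSnapshot(float Yaw, float Pitch, float MouthOpen);

public static class InputLatch
{
    private static TrackingSnapshot latest = new TrackingSnapshot(0, 0, 0);

    // Tracking thread: publish the newest sample.
    public static void Publish(TrackingSnapshot snapshot) => Volatile.Write(ref latest, snapshot);

    // Simulation and render threads: both read the same latched snapshot,
    // so neither references the other's mutable state.
    public static TrackingSnapshot Latest => Volatile.Read(ref latest);
}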

Another method mentioned by Carmack is time warping, of which he states:

    After drawing a frame with the best information at your disposal, possibly with bypassed view parameters, instead of displaying it directly, fetch the latest user input, generate updated view parameters, and calculate a transformation that warps the rendered image into a position that approximates where it would be with the updated parameters. Using that transform, warp the rendered image into an updated form on screen that reflects the new input. If there are two dimensional overlays present on the screen that need to remain fixed, they must be drawn or composited in after the warp operation, to prevent them from incorrectly moving as the view parameters change.

There are different methods of warping, namely forward warping and reverse warping, and these warping methods can be used along with view bypass. The added complexity of handling input concurrently with the main loop is workable, as the input loop is entirely independent of the game state.

    Conclusion

The strategies mentioned above would give us a smoother experience; however, based on my analysis, Carmack's solutions are more feasible for a project of our scale. We simply don't have the team or the technical resources for from-camera video interpolation, as it is too computationally expensive to implement with minimal overhead.

    area:documentation proposal priority:high 
    opened by sr229 5
• Hook up Tracking Worker to Live2D

As the final task for Milestone 1, we're going to hook up the tracking worker to Live2D and see if we can spot some bugs before we turn in our release.

    proposal priority:high 
    opened by sr229 5
• User Interface

We want to customize the layout, and to do that we need the following:

• Make the Live2D model a draggable component
• Custom backgrounds (green screen default, white default background, or an image).
• Persist this layout into a format (YAML, perhaps? See the sketch below.)
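
A minimal sketch of what persisting the layout could look like, assuming YamlDotNet as the serializer; the type and field names are hypothetical, illustrative only:

using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

// Hypothetical layout model.
public class LayoutConfig
{
    public float ModelX { get; set; }
    public float ModelY { get; set; }
    public float ModelScale { get; set; } = 1.0f;
    public string Background { get; set; } = "green"; // "green", "white", or an image path
}

public static class LayoutStore
{
    // Produces YAML such as:
    //   modelX: 120
    //   modelY: 80
    //   modelScale: 1
    //   background: green
    public static string Save(LayoutConfig layout)
        => new SerializerBuilder()
            .WithNamingConvention(CamelCaseNamingConvention.Instance)
            .Build()
            .Serialize(layout);
}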

    Todo

    • [ ] Draggable and resizable Live2D container.
    • [ ] Backgrounds support (White background, Green background, user-defined).

Essentially, we're going to have a layout similar to the mockup attached to this issue (screenshot).

    proposal priority:high 
    opened by sr229 5
• Extension System

    Discussed in https://github.com/vignetteapp/vignette/discussions/216

Originally posted by sr229, May 9, 2021. This has been requested by the community; however, it is fairly low priority as we are focusing mostly on the core components. The way this works is as follows:

    • Extensions can expose their settings in MainMenu.
• They will be strictly conformant to the o!f model to load properly. This is considered the "bare minimum" required to make an extension.
• They will be packaged as either a .dll or a .nupkg, which the program can "extract" or "compile" into a DLL; something we can do once we have a better idea of how to dynamically load assemblies.

Since this is an RFC, anyone can propose a better design; we appreciate alternative approaches. A rough sketch of one possible loading mechanism follows.
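
A minimal sketch of dynamic assembly loading, not the final design; the IExtension contract is hypothetical. It scans a directory for extension DLLs and instantiates every concrete type implementing the contract.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical extension contract.
public interface IExtension
{
    void Load();
}

public static class ExtensionLoader
{
    // Scan a directory for DLLs and instantiate every concrete IExtension.
    public static IEnumerable<IExtension> LoadFrom(string directory)
        => Directory.EnumerateFiles(directory, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => typeof(IExtension).IsAssignableFrom(type) && !type.IsAbstract)
            .Select(type => (IExtension)Activator.CreateInstance(type));
}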

    priority:high 
    opened by sr229 4
• UI controls, sprites, containers, etc. as a NuGet package

It would be a nice idea if you could make a separate library that includes all the UI controls, themeable sprites, containers, etc. as a NuGet package. It would allow other developers to integrate it into their projects and have access to a nice suite of UI controls and other components instead of writing them from scratch.

    priority:high area:user-interface 
    opened by Whatareyoulaughingat 6
• VRM Support

Here's a little backlog item while we're working on the rendering/scene/model API for the extensions. Since this is a reference implementation for all 3D/2D model support extensions, VRM is going to be our flagship extension and will serve as an extension reference for model support.

    References

    proposal priority:high area-extensions 
    opened by sr229 0
• Steamworks API integration

As part of #251, we might want to include the Steamworks API in case people have a use for it in our Steam releases. It would be optional and hidden behind a build flag.

    proposal priority:medium 
    opened by sr229 2
• First time user experience (OOBE)

Design specifications are now released for the first-time user experience. This will guide new users through setting up the bare essentials so they can get up and running quickly.

    priority:medium area:user-interface 
    opened by sr229 0
• Internationalization Support (i18n)

We'll have to support multiple languages. A good start is looking at Crowdin as a source. We'll support languages on demand, but for starters I think we'll support English, Japanese, and Chinese (Simplified and Traditional), given we have people proficient in those languages.

As for implementation, that will be the second part of the investigation. A minimal sketch of one possible approach is below.
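
This sketch uses standard .NET resource files only as an illustration; the actual implementation may go through osu!framework's localisation support instead, and the resource base name and file names here are hypothetical.

using System.Globalization;
using System.Reflection;
using System.Resources;

public static class Strings
{
    // Assumes embedded resources such as Strings.resx, Strings.ja.resx,
    // Strings.zh-Hans.resx, and Strings.zh-Hant.resx.
    private static readonly ResourceManager resources =
        new ResourceManager("Vignette.Localisation.Strings", Assembly.GetExecutingAssembly());

    // Falls back to the key itself when no translation exists.
    public static string Get(string key, CultureInfo culture = null)
        => resources.GetString(key, culture ?? CultureInfo.CurrentUICulture) ?? key;
}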

    good first issue priority:low 
    opened by LeNitrous 13
• Documentation Tasks

We'll have to document the more significant parts at some point. We'd want contributors to have an idea of how everything works in the backend, after all.

    For now we can direct them to osu!framework's Getting Started wiki pages.

    area:documentation good first issue priority:low 
    opened by LeNitrous 0
Releases (2021.1102.2)

Owner: Vignette, the open source VTuber Toolkit. Made with 💖.