
Commit

Update index.html
journey-zhuang authored Dec 2, 2024
1 parent 3f6725b commit 7a877e5
Showing 1 changed file with 24 additions and 6 deletions.
@@ -171,20 +171,29 @@ <h1 class="title is-1 publication-title">Custyle: Gaussian Splatting Stylization
</div>
</section>

<!-- <section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<video id="teaser" autoplay muted loop playsinline height="100%">
<source src="./static/videos/teaser.mp4"
type="video/mp4">
</video>
<h2 class="subtitle has-text-centered">
<span class="dnerf">Custyle</span> enables versatile, high-quality 3D stylization across diverse styles.
</h2>
</div>
</div>
</section> -->

<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="./static/images/banner.png" alt="banner image" />
<h2 class="subtitle has-text-centered">
<span class="dnerf">Custyle</span> enables versatile, high-quality 3D stylization across diverse styles.
</h2>
</div>
</div>
</section>


@@ -254,7 +263,16 @@ <h2 class="subtitle has-text-centered">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Recent research in 2D image stylization has shifted from traditional approaches based on universally pre-trained VGG networks or adversarial learning paradigms to diffusion models, which facilitate progressive and fine-grained style infusion.
Nevertheless, this advancement has been scarcely explored for 3D stylization.
In this paper, we introduce a comprehensive Gaussian Splatting (GS) stylization framework that facilitates style transfer from a customized reference image to an arbitrary 3D model.
At a high level, we distill the style scores of a pre-trained, specialized diffusion model into GS optimization through an adaptive dynamic schedule.
Specifically, we begin by embedding the style of a customized reference into the front view using the image stylization diffusion model.
To ensure geometric consistency, the stylization adjustments of the front view are propagated to fixed perspectives using a multiview diffusion model guided by the reference image.
Furthermore, we introduce a straightforward yet effective score distillation strategy, termed <em>style outpainting</em>, to progressively supplement the remaining views without ground-truth supervision.
Additionally, we find that eliminating outlier Gaussians with excessively high gradients effectively reduces the risk of stylization failure.
We conduct extensive experiments on a collection of style references (i.e., artistic paintings and customized designs) and 3D models to validate our framework.
Comprehensive visualizations and quantitative analyses demonstrate the superiority of our framework in achieving high-fidelity, geometry-consistent GS stylization compared to previous methods.
</p>
</div>
</div>
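The two most concrete mechanisms in the abstract, score distillation under an adaptive timestep schedule and pruning of outlier Gaussians with excessively high gradients, can be sketched briefly in PyTorch. Everything named below (`adaptive_timestep`, the `diffusion` wrapper methods, the z-score pruning rule) is an assumption for illustration, not the paper's actual implementation.

```python
import torch


def adaptive_timestep(step, total_steps, t_max=980, t_min=20):
    """Hypothetical 'adaptive dynamic schedule': anneal the sampled
    diffusion timestep from noisy (t_max) toward clean (t_min) as GS
    optimization proceeds. The paper does not spell out its schedule;
    linear annealing is a placeholder."""
    frac = step / max(total_steps - 1, 1)
    return int(round(t_max + frac * (t_min - t_max)))


def style_distillation_loss(render, diffusion, style_emb, step, total_steps):
    """SDS-style score distillation from an assumed stylization
    diffusion model into one rendered view. `diffusion.add_noise` and
    `diffusion.predict_noise` are stand-ins for whatever diffusion API
    is actually used."""
    t = adaptive_timestep(step, total_steps)
    noise = torch.randn_like(render)
    noisy = diffusion.add_noise(render, noise, t)
    eps = diffusion.predict_noise(noisy, t, style_emb)
    # Standard SDS trick: treat (eps - noise) as a fixed gradient
    # direction and inject it via a dot product with the render.
    return ((eps - noise).detach() * render).sum()


def prune_outlier_gaussians(params, grad_norms, z=3.0):
    """Drop Gaussians whose accumulated gradient norm is an extreme
    outlier, here more than z standard deviations above the mean. The
    z-score rule is illustrative; the paper only states that Gaussians
    with excessively high gradients are eliminated."""
    keep = grad_norms <= grad_norms.mean() + z * grad_norms.std()
    return {name: tensor[keep] for name, tensor in params.items()}
```

In a full pipeline, the distillation loss would be backpropagated into the Gaussian parameters at each iteration, with the pruning step applied periodically during optimization.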
