

DEMOS

Here are some examples of what you can build with Gradio in just a few lines of
Python. Once you're ready to learn, head over to the ⚡ Quickstart.

Check out more demos on Spaces.


πŸ–ŠοΈ TEXT & NATURAL LANGUAGE PROCESSING

Hello World · Text Generation · Autocomplete · Sentiment Analysis · Named Entity Recognition · Multilingual Translation

The simplest possible Gradio demo. It wraps a 'Hello {name}!' function in an
Interface that accepts and returns text.

import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
    
if __name__ == "__main__":
    demo.launch()   


gradio/hello_world built with Gradio. Hosted on Spaces
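
The "text" shortcuts above instantiate default components. A minimal variation (a sketch, assuming the same Gradio 3.x API used throughout these demos) swaps them for configured component instances to control the number of lines, placeholder text, and labels:

import gradio as gr

def greet(name):
    return "Hello " + name + "!"

# passing component instances instead of the "text" shortcut exposes
# per-component options such as lines, placeholder, and label
demo = gr.Interface(
    fn=greet,
    inputs=gr.Textbox(lines=2, placeholder="Name here...", label="name"),
    outputs=gr.Textbox(label="output"),
)

if __name__ == "__main__":
    demo.launch()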

This text generation demo takes in input text and returns generated text. It
uses the Transformers library to set up the model and has two examples.

import gradio as gr
from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')

def generate(text):
    result = generator(text, max_length=30, num_return_sequences=1)
    return result[0]["generated_text"]

examples = [
    ["The Moon's orbit around Earth has"],
    ["The smooth Borealis basin in the Northern Hemisphere covers 40%"],
]

demo = gr.Interface(
    fn=generate,
    inputs=gr.inputs.Textbox(lines=5, label="Input Text"),
    outputs=gr.outputs.Textbox(label="Generated Text"),
    examples=examples
)

demo.launch()



gradio/text_generation built with Gradio. Hosted on Spaces

This text generation demo works like autocomplete: there's only one textbox, and
it's used for both the input and the output. The demo loads the model as an
interface, uses that interface as an API, and then uses Blocks to create the UI.
All of this is done in less than 10 lines of code.

import gradio as gr
import os

# save your HF API token from https://hf.co/settings/tokens as an env variable to avoid rate limiting
auth_token = os.getenv("auth_token")

# load a model from https://hf.co/models as an interface, then use it as an api 
# you can remove the api_key parameter if you don't care about rate limiting. 
api = gr.load("huggingface/EleutherAI/gpt-j-6B", api_key=auth_token)

def complete_with_gpt(text):
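    # send only the last 50 characters to the model as the prompt; the
    # generated text echoes that prompt, so appending it to the untouched
    # prefix yields the original text plus the model's continuation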
    return text[:-50] + api(text[-50:])

with gr.Blocks() as demo:
    textbox = gr.Textbox(placeholder="Type here...", lines=4)
    btn = gr.Button("Autocomplete")
    
    # define what will run when the button is clicked, here the textbox is used as both an input and an output
    btn.click(fn=complete_with_gpt, inputs=textbox, outputs=textbox, queue=False)

demo.launch()


gradio/autocomplete built with Gradio. Hosted on Spaces

This sentiment analysis demo takes in input text and returns its classification
as positive, negative, or neutral using Gradio's Label output. It also uses the
default interpretation method, so users can click the Interpret button after a
submission and see which words had the biggest effect on the output.

import gradio as gr
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")
sid = SentimentIntensityAnalyzer()

def sentiment_analysis(text):
    scores = sid.polarity_scores(text)
    del scores["compound"]
    return scores

demo = gr.Interface(
    fn=sentiment_analysis, 
    inputs=gr.Textbox(placeholder="Enter a positive or negative sentence here..."), 
    outputs="label", 
    interpretation="default",
    examples=[["This is wonderful!"]])

demo.launch()


gradio/sentiment_analysis built with Gradio. Hosted on Spaces
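
One detail worth noting: after "compound" is dropped, the dict VADER returns is already in the label-to-confidence format the Label output expects, so no further formatting is needed:

scores = sentiment_analysis("This is wonderful!")
print(scores)  # e.g. {'neg': 0.0, 'neu': 0.32, 'pos': 0.68} (illustrative values)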

This simple demo takes advantage of Gradio's HighlightedText, JSON and HTML
outputs to create a clear NER segmentation.

import gradio as gr
import os
os.system('python -m spacy download en_core_web_sm')
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")

def text_analysis(text):
    doc = nlp(text)
    html = displacy.render(doc, style="dep", page=True)
    html = (
        "<div style='max-width:100%; max-height:360px; overflow:auto'>"
        + html
        + "</div>"
    )
    pos_count = {
        "char_count": len(text),
        "token_count": 0,
    }
    pos_tokens = []

    for token in doc:
        pos_tokens.extend([(token.text, token.pos_), (" ", None)])

    return pos_tokens, pos_count, html

demo = gr.Interface(
    text_analysis,
    gr.Textbox(placeholder="Enter sentence here..."),
    ["highlight", "json", "html"],
    examples=[
        ["What a beautiful morning for a walk!"],
        ["It was the best of times, it was the worst of times."],
    ],
)

demo.launch()



gradio/text_analysis built with Gradio. Hosted on Spaces

This translation demo takes in the text, source and target languages, and
returns the translation. It uses the Transformers library to set up the model
and has a title, description, and example.

import gradio as gr
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
import torch

# this model was loaded from https://hf.co/models
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
device = 0 if torch.cuda.is_available() else -1
LANGS = ["ace_Arab", "eng_Latn", "fra_Latn", "spa_Latn"]

def translate(text, src_lang, tgt_lang):
    """
    Translate the text from source lang to target lang
    """
    translation_pipeline = pipeline("translation", model=model, tokenizer=tokenizer, src_lang=src_lang, tgt_lang=tgt_lang, max_length=400, device=device)
    result = translation_pipeline(text)
    return result[0]['translation_text']

demo = gr.Interface(
    fn=translate,
    inputs=[
        gr.components.Textbox(label="Text"),
        gr.components.Dropdown(label="Source Language", choices=LANGS),
        gr.components.Dropdown(label="Target Language", choices=LANGS),
    ],
    outputs=["text"],
    examples=[["Building a translation demo with Gradio is so easy!", "eng_Latn", "spa_Latn"]],
    cache_examples=False,
    title="Translation Demo",
    description="This demo is a simplified version of the original [NLLB-Translator](https://huggingface.co/spaces/Narrativaai/NLLB-Translator) space"
)

demo.launch()


gradio/translation built with Gradio. Hosted on Spaces


πŸ–ΌοΈ IMAGES & COMPUTER VISION

Image Classification · Image Segmentation · Image Transformation with AnimeGAN · Image Generation (Fake GAN) · Iterative Output · 3D Models

Simple image classification in Pytorch with Gradio's Image input and Label
output.

import gradio as gr
import torch
import requests
from torchvision import transforms

model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def predict(inp):
  inp = transforms.ToTensor()(inp).unsqueeze(0)
  with torch.no_grad():
    prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
    confidences = {labels[i]: float(prediction[i]) for i in range(1000)}    
  return confidences

demo = gr.Interface(fn=predict, 
             inputs=gr.inputs.Image(type="pil"),
             outputs=gr.outputs.Label(num_top_classes=3),
             examples=[["cheetah.jpg"]],
             )
             
demo.launch()


gradio/image_classification built with Gradio. Hosted on Spaces

Simple image segmentation using Gradio's AnnotatedImage component.

import gradio as gr
import numpy as np
import random

with gr.Blocks() as demo:
    section_labels = [
        "apple",
        "banana",
        "carrot",
        "donut",
        "eggplant",
        "fish",
        "grapes",
        "hamburger",
        "ice cream",
        "juice",
    ]

    with gr.Row():
        num_boxes = gr.Slider(0, 5, 2, step=1, label="Number of boxes")
        num_segments = gr.Slider(0, 5, 1, step=1, label="Number of segments")

    with gr.Row():
        img_input = gr.Image()
        img_output = gr.AnnotatedImage().style(
            color_map={"banana": "#a89a00", "carrot": "#ffae00"}
        )

    section_btn = gr.Button("Identify Sections")
    selected_section = gr.Textbox(label="Selected Section")

    def section(img, num_boxes, num_segments):
        sections = []
        for a in range(num_boxes):
            x = random.randint(0, img.shape[1])
            y = random.randint(0, img.shape[0])
            w = random.randint(0, img.shape[1] - x)
            h = random.randint(0, img.shape[0] - y)
            sections.append(((x, y, x + w, y + h), section_labels[a]))
        for b in range(num_segments):
            x = random.randint(0, img.shape[1])
            y = random.randint(0, img.shape[0])
            r = random.randint(0, min(x, y, img.shape[1] - x, img.shape[0] - y))
            mask = np.zeros(img.shape[:2])
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    dist_square = (i - y) ** 2 + (j - x) ** 2
                    if dist_square < r**2:
                        mask[i, j] = round((r**2 - dist_square) / r**2 * 4) / 4
            sections.append((mask, section_labels[b + num_boxes]))
        return (img, sections)

    section_btn.click(section, [img_input, num_boxes, num_segments], img_output)

    def select_section(evt: gr.SelectData):
        return section_labels[evt.index]

    img_output.select(select_section, None, selected_section)


demo.launch()



gradio/image_segmentation built with Gradio. Hosted on Spaces
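
As the section() function above shows, an AnnotatedImage value is a (base_image, annotations) pair, where each annotation is either a bounding box in pixel coordinates or a float mask the size of the image. A minimal sketch of both annotation shapes:

import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)  # base image
box = ((10, 10, 50, 50), "apple")              # (x1, y1, x2, y2) bounding box
mask = (np.ones((100, 100)) * 0.5, "banana")   # per-pixel confidence mask
annotated_value = (img, [box, mask])           # a valid gr.AnnotatedImage value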

Recreate the viral AnimeGAN image transformation demo.

import gradio as gr
import torch

model2 = torch.hub.load(
    "AK391/animegan2-pytorch:main",
    "generator",
    pretrained=True,
    progress=False
)
model1 = torch.hub.load("AK391/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1")
face2paint = torch.hub.load(
    'AK391/animegan2-pytorch:main', 'face2paint', 
    size=512,side_by_side=False
)

def inference(img, ver):
    if ver == 'version 2 (🔺 robustness,🔻 stylization)':
        out = face2paint(model2, img)
    else:
        out = face2paint(model1, img)
    return out

title = "AnimeGANv2"
description = "Gradio Demo for AnimeGanv2 Face Portrait. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please use a cropped portrait picture for best results similar to the examples below."
article = "Github Repo Pytorch "
examples=[['groot.jpeg','version 2 (🔺 robustness,🔻 stylization)'],['gongyoo.jpeg','version 1 (🔺 stylization, 🔻 robustness)']]

demo = gr.Interface(
    fn=inference, 
    inputs=[gr.inputs.Image(type="pil"),gr.inputs.Radio(['version 1 (🔺 stylization, 🔻 robustness)','version 2 (🔺 robustness,🔻 stylization)'], type="value", default='version 2 (🔺 robustness,🔻 stylization)', label='version')], 
    outputs=gr.outputs.Image(type="pil"),
    title=title,
    description=description,
    article=article,
    examples=examples)

demo.launch()


gradio/animeganv2 built with Gradio. Hosted on Spaces

This is a fake GAN that shows how to create a text-to-image interface for image
generation. Check out the Stable Diffusion demo for more:
https://hf.co/spaces/stabilityai/stable-diffusion/

# This demo needs to be run from the repo folder.
# python demo/fake_gan/run.py
import random

import gradio as gr


def fake_gan():
    images = [
        (random.choice(
            [
                "https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80",
                "https://images.unsplash.com/photo-1554151228-14d9def656e4?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=386&q=80",
                "https://images.unsplash.com/photo-1542909168-82c3e7fdca5c?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8aHVtYW4lMjBmYWNlfGVufDB8fDB8fA%3D%3D&w=1000&q=80",
                "https://images.unsplash.com/photo-1546456073-92b9f0a8d413?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=387&q=80",
                "https://images.unsplash.com/photo-1601412436009-d964bd02edbc?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=464&q=80",
            ]
        ), f"label {i}" if i != 0 else "label" * 50)
        for i in range(3)
    ]
    return images


with gr.Blocks() as demo:
    with gr.Column(variant="panel"):
        with gr.Row(variant="compact"):
            text = gr.Textbox(
                label="Enter your prompt",
                show_label=False,
                max_lines=1,
                placeholder="Enter your prompt",
            ).style(
                container=False,
            )
            btn = gr.Button("Generate image").style(full_width=False)

        gallery = gr.Gallery(
            label="Generated images", show_label=False, elem_id="gallery"
        ).style(columns=[2], rows=[2], object_fit="contain", height="auto")

    btn.click(fake_gan, None, gallery)

if __name__ == "__main__":
    demo.launch()



gradio/fake_gan built with Gradio. Hosted on Spaces

This demo uses a fake model to showcase iterative output. The Image output
updates each time the generator yields, until the final image is returned.

import gradio as gr
import numpy as np
import time

# define core fn, which returns a generator {steps} times before returning the image
def fake_diffusion(steps):
    for _ in range(steps):
        time.sleep(1)
        image = np.random.random((600, 600, 3))
        yield image
    image = "https://gradio-builds.s3.amazonaws.com/diffusion_image/cute_dog.jpg"
    yield image


demo = gr.Interface(fake_diffusion, inputs=gr.Slider(1, 10, 3), outputs="image")

# define queue - required for generators
demo.queue()

demo.launch()



gradio/fake_diffusion built with Gradio. Hosted on Spaces

A demo for predicting the depth of an image and generating a 3D model of it.

import gradio as gr
from transformers import DPTFeatureExtractor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import open3d as o3d
from pathlib import Path

feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

def process_image(image_path):
    image_path = Path(image_path)
    image_raw = Image.open(image_path)
    image = image_raw.resize(
        (800, int(800 * image_raw.size[1] / image_raw.size[0])),
        Image.Resampling.LANCZOS)

    # prepare image for the model
    encoding = feature_extractor(image, return_tensors="pt")

    # forward pass
    with torch.no_grad():
        outputs = model(**encoding)
        predicted_depth = outputs.predicted_depth

    # interpolate to original size
    prediction = torch.nn.functional.interpolate(
        predicted_depth.unsqueeze(1),
        size=image.size[::-1],
        mode="bicubic",
        align_corners=False,
    ).squeeze()
    output = prediction.cpu().numpy()
    depth_image = (output * 255 / np.max(output)).astype('uint8')
    try:
        gltf_path = create_3d_obj(np.array(image), depth_image, image_path)
    except Exception:
        # retry with a coarser octree depth if the first reconstruction fails
        gltf_path = create_3d_obj(
            np.array(image), depth_image, image_path, depth=8)
    img = Image.fromarray(depth_image)
    return [img, gltf_path, gltf_path]


def create_3d_obj(rgb_image, depth_image, image_path, depth=10):
    depth_o3d = o3d.geometry.Image(depth_image)
    image_o3d = o3d.geometry.Image(rgb_image)
    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
        image_o3d, depth_o3d, convert_rgb_to_intensity=False)
    w = int(depth_image.shape[1])
    h = int(depth_image.shape[0])

    camera_intrinsic = o3d.camera.PinholeCameraIntrinsic()
    camera_intrinsic.set_intrinsics(w, h, 500, 500, w/2, h/2)

    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd_image, camera_intrinsic)

    print('normals')
    pcd.normals = o3d.utility.Vector3dVector(
        np.zeros((1, 3)))  # invalidate existing normals
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
    pcd.orient_normals_towards_camera_location(
        camera_location=np.array([0., 0., 1000.]))
    pcd.transform([[1, 0, 0, 0],
                   [0, -1, 0, 0],
                   [0, 0, -1, 0],
                   [0, 0, 0, 1]])
    pcd.transform([[-1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]])

    print('run Poisson surface reconstruction')
    with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug):
        mesh_raw, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=depth, width=0, scale=1.1, linear_fit=True)

    voxel_size = max(mesh_raw.get_max_bound() - mesh_raw.get_min_bound()) / 256
    print(f'voxel_size = {voxel_size:e}')
    mesh = mesh_raw.simplify_vertex_clustering(
        voxel_size=voxel_size,
        contraction=o3d.geometry.SimplificationContraction.Average)

    # vertices_to_remove = densities < np.quantile(densities, 0.001)
    # mesh.remove_vertices_by_mask(vertices_to_remove)
    bbox = pcd.get_axis_aligned_bounding_box()
    mesh_crop = mesh.crop(bbox)
    gltf_path = f'./{image_path.stem}.gltf'
    o3d.io.write_triangle_mesh(
        gltf_path, mesh_crop, write_triangle_uvs=True)
    return gltf_path

title = "Demo: zero-shot depth estimation with DPT + 3D Point Cloud"
description = "This demo is a variation from the original DPT Demo. It uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object."
examples = [["examples/1-jonathan-borba-CgWTqYxHEkg-unsplash.jpg"]]

iface = gr.Interface(fn=process_image,
                     inputs=[gr.Image(
                         type="filepath", label="Input Image")],
                     outputs=[gr.Image(label="predicted depth", type="pil"),
                              gr.Model3D(label="3d mesh reconstruction", clear_color=[
                                                 1.0, 1.0, 1.0, 1.0]),
                              gr.File(label="3d gLTF")],
                     title=title,
                     description=description,
                     examples=examples,
                     allow_flagging="never",
                     cache_examples=False)

iface.launch(debug=True, enable_queue=False)


gradio/depth_estimation built with Gradio. Hosted on Spaces
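
For reference, create_from_rgbd_image back-projects each pixel through the pinhole model fixed by set_intrinsics(w, h, 500, 500, w/2, h/2). A minimal NumPy sketch of that projection (a standalone illustration, not part of the demo):

import numpy as np

def backproject(depth, fx=500.0, fy=500.0):
    # back-project an (H, W) depth map into an (H*W, 3) point cloud
    h, w = depth.shape
    cx, cy = w / 2, h / 2
    v, u = np.mgrid[0:h, 0:w]      # per-pixel row (v) and column (u) indices
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx          # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)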


📈 TABULAR DATA & PLOTS

Interactive Dashboard · Dashboard with Live Updates · Interactive Map of AirBnB Locations · Outbreak Forecast · Clustering with Scikit-Learn · Time Series Forecasting · Income Classification with XGBoost · Leaderboard · Tax Calculator

This demo shows how you can build an interactive dashboard with Gradio. Click a
Python library on the left-hand side, then on the right-hand side click the
metric you'd like to see plotted over time. Data is pulled from Hugging Face Hub
datasets.

import gradio as gr
import pandas as pd
import plotly.express as px
from helpers import *


LIBRARIES = ["accelerate", "datasets", "diffusers", "evaluate", "gradio", "hub_docs",
             "huggingface_hub", "optimum", "pytorch_image_models", "tokenizers", "transformers"]


def create_pip_plot(libraries, pip_choices):
    if "Pip" not in pip_choices:
        return gr.update(visible=False)
    output = retrieve_pip_installs(libraries, "Cumulated" in pip_choices)
    df = pd.DataFrame(output).melt(id_vars="day")
    plot = px.line(df, x="day", y="value", color="variable",
                   title="Pip installs")
    plot.update_layout(legend=dict(x=0.5, y=0.99),  title_x=0.5, legend_title_text="")
    return gr.update(value=plot, visible=True)


def create_star_plot(libraries, star_choices):
    if "Stars" not in star_choices:
        return gr.update(visible=False)
    output = retrieve_stars(libraries, "Week over Week" in star_choices)
    df = pd.DataFrame(output).melt(id_vars="day")
    plot = px.line(df, x="day", y="value", color="variable",
                   title="Number of stargazers")
    plot.update_layout(legend=dict(x=0.5, y=0.99),  title_x=0.5, legend_title_text="")
    return gr.update(value=plot, visible=True)


def create_issue_plot(libraries, issue_choices):
    if "Issue" not in issue_choices:
        return gr.update(visible=False)
    output = retrieve_issues(libraries,
                             exclude_org_members="Exclude org members" in issue_choices,
                             week_over_week="Week over Week" in issue_choices)
    df = pd.DataFrame(output).melt(id_vars="day")
    plot = px.line(df, x="day", y="value", color="variable",
                   title="Cumulated number of issues, PRs, and comments",
                   )
    plot.update_layout(legend=dict(x=0.5, y=0.99),  title_x=0.5, legend_title_text="")
    return gr.update(value=plot, visible=True)


with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            with gr.Box():
                gr.Markdown("## Select libraries to display")
                libraries = gr.CheckboxGroup(choices=LIBRARIES, label="")
        with gr.Column():
            with gr.Box():
                gr.Markdown("## Select graphs to display")
                pip = gr.CheckboxGroup(choices=["Pip", "Cumulated"], label="")
                stars = gr.CheckboxGroup(choices=["Stars", "Week over Week"], label="")
                issues = gr.CheckboxGroup(choices=["Issue", "Exclude org members", "Week over Week"], label="")
    with gr.Row():
        fetch = gr.Button(value="Fetch")
    with gr.Row():
        with gr.Column():
            pip_plot = gr.Plot(visible=False)
            star_plot = gr.Plot(visible=False)
            issue_plot = gr.Plot(visible=False)

    fetch.click(create_pip_plot, inputs=[libraries, pip], outputs=pip_plot)
    fetch.click(create_star_plot, inputs=[libraries, stars], outputs=star_plot)
    fetch.click(create_issue_plot, inputs=[libraries, issues], outputs=issue_plot)


if __name__ == "__main__":
    demo.launch()


gradio/dashboard built with Gradio. Hosted on Spaces
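
Each plotting function above returns gr.update(...), the Gradio idiom that lets one event handler change a component's properties (here, visibility) as well as its value. A minimal sketch of the pattern, assuming the same Gradio 3.x API as the demo:

import gradio as gr

def toggle(show):
    # gr.update can set any component property, not just its value
    return gr.update(value="Now you see me!", visible=show)

with gr.Blocks() as demo:
    show = gr.Checkbox(label="Show the box?")
    box = gr.Textbox(visible=False)
    show.change(toggle, show, box)

demo.launch()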

This demo shows how you can build a live interactive dashboard with Gradio. The
current time and the plot are each refreshed every second via the 'every'
keyword in the event handler. Changing the value of the slider controls the
period of the sine curve (the distance between peaks).

import math

import pandas as pd

import gradio as gr
import datetime
import numpy as np


def get_time():
    return datetime.datetime.now()


plot_end = 2 * math.pi


def get_plot(period=1):
    global plot_end
    x = np.arange(plot_end - 2 * math.pi, plot_end, 0.02)
    y = np.sin(2 * math.pi * period * x)
    update = gr.LinePlot.update(
        value=pd.DataFrame({"x": x, "y": y}),
        x="x",
        y="y",
        title="Plot (updates every second)",
        width=600,
        height=350,
    )
    plot_end += 2 * math.pi
    if plot_end > 1000:
        plot_end = 2 * math.pi
    return update


with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            c_time2 = gr.Textbox(label="Current Time refreshed every second")
            gr.Textbox(
                "Change the value of the slider to automatically update the plot",
                label="",
            )
            period = gr.Slider(
                label="Period of plot", value=1, minimum=0, maximum=10, step=1
            )
            plot = gr.LinePlot(show_label=False)
        with gr.Column():
            name = gr.Textbox(label="Enter your name")
            greeting = gr.Textbox(label="Greeting")
            button = gr.Button(value="Greet")
            button.click(lambda s: f"Hello {s}", name, greeting)

    demo.load(lambda: datetime.datetime.now(), None, c_time2, every=1)
    dep = demo.load(get_plot, None, plot, every=1)
    period.change(get_plot, period, plot, every=1, cancels=[dep])

if __name__ == "__main__":
    demo.queue().launch()



gradio/live_dashboard built with Gradio. Hosted on Spaces

Display an interactive map of AirBnB locations with Plotly. Data is hosted on
HuggingFace Datasets.

import gradio as gr
import plotly.graph_objects as go
from datasets import load_dataset

dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()

def filter_map(min_price, max_price, boroughs):

    filtered_df = df[(df['neighbourhood_group'].isin(boroughs)) & 
          (df['price'] > min_price) & (df['price'] < max_price)]
    names = filtered_df["name"].tolist()
    prices = filtered_df["price"].tolist()
    text_list = [(names[i], prices[i]) for i in range(0, len(names))]
    fig = go.Figure(go.Scattermapbox(
            customdata=text_list,
            lat=filtered_df['latitude'].tolist(),
            lon=filtered_df['longitude'].tolist(),
            mode='markers',
            marker=go.scattermapbox.Marker(
                size=6
            ),
            hoverinfo="text",
            hovertemplate='Name: %{customdata[0]}<br>Price: $%{customdata[1]}'
        ))

    fig.update_layout(
        mapbox_style="open-street-map",
        hovermode='closest',
        mapbox=dict(
            bearing=0,
            center=go.layout.mapbox.Center(
                lat=40.67,
                lon=-73.90
            ),
            pitch=0,
            zoom=9
        ),
    )

    return fig

with gr.Blocks() as demo:
    with gr.Column():
        with gr.Row():
            min_price = gr.Number(value=250, label="Minimum Price")
            max_price = gr.Number(value=1000, label="Maximum Price")
        boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
        btn = gr.Button(value="Update Filter")
        map = gr.Plot().style()
    demo.load(filter_map, [min_price, max_price, boroughs], map)
    btn.click(filter_map, [min_price, max_price, boroughs], map)

if __name__ == "__main__":
    demo.launch()


gradio/map_airbnb built with Gradio. Hosted on Spaces

Generate a plot based on 5 inputs.

import altair

import gradio as gr
from math import sqrt
import matplotlib

matplotlib.use("Agg")

import matplotlib.pyplot as plt
import numpy as np
import plotly.express as px
import pandas as pd


def outbreak(plot_type, r, month, countries, social_distancing):
    months = ["January", "February", "March", "April", "May"]
    m = months.index(month)
    start_day = 30 * m
    final_day = 30 * (m + 1)
    x = np.arange(start_day, final_day + 1)
    pop_count = {"USA": 350, "Canada": 40, "Mexico": 300, "UK": 120}
    if social_distancing:
        r = sqrt(r)
    df = pd.DataFrame({"day": x})
    for country in countries:
        df[country] = x ** (r) * (pop_count[country] + 1)

    if plot_type == "Matplotlib":
        fig = plt.figure()
        plt.plot(df["day"], df[countries].to_numpy())
        plt.title("Outbreak in " + month)
        plt.ylabel("Cases")
        plt.xlabel("Days since Day 0")
        plt.legend(countries)
        return fig
    elif plot_type == "Plotly":
        fig = px.line(df, x="day", y=countries)
        fig.update_layout(
            title="Outbreak in " + month,
            xaxis_title="Days Since Day 0",
            yaxis_title="Cases",
        )
        return fig
    elif plot_type == "Altair":
        df = df.melt(id_vars="day").rename(columns={"variable": "country"})
        fig = altair.Chart(df).mark_line().encode(x="day", y='value', color='country')
        return fig
    else:
        raise ValueError("A plot type must be selected")


inputs = [
    gr.Dropdown(["Matplotlib", "Plotly", "Altair"], label="Plot Type"),
    gr.Slider(1, 4, 3.2, label="R"),
    gr.Dropdown(["January", "February", "March", "April", "May"], label="Month"),
    gr.CheckboxGroup(
        ["USA", "Canada", "Mexico", "UK"], label="Countries", value=["USA", "Canada"]
    ),
    gr.Checkbox(label="Social Distancing?"),
]
outputs = gr.Plot()

demo = gr.Interface(
    fn=outbreak,
    inputs=inputs,
    outputs=outputs,
    examples=[
        ["Matplotlib", 2, "March", ["Mexico", "UK"], True],
        ["Altair", 2, "March", ["Mexico", "Canada"], True],
        ["Plotly", 3.6, "February", ["Canada", "Mexico", "UK"], False],
    ],
    cache_examples=True,
)

if __name__ == "__main__":
    demo.launch()



gradio/outbreak_forecast built with Gradio. Hosted on Spaces

This demo, built with Blocks, generates 9 plots based on the input.

import gradio as gr
import math
from functools import partial
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import (
    AgglomerativeClustering, Birch, DBSCAN, KMeans, MeanShift, OPTICS, SpectralClustering, estimate_bandwidth
)
from sklearn.datasets import make_blobs, make_circles, make_moons
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

plt.style.use('seaborn')
SEED = 0
MAX_CLUSTERS = 10
N_SAMPLES = 1000
N_COLS = 3
FIGSIZE = 7, 7  # does not affect size in webpage
COLORS = [
    'blue', 'orange', 'green', 'red', 'purple', 'brown', 'pink', 'gray', 'olive', 'cyan'
]
assert len(COLORS) >= MAX_CLUSTERS, "Not enough different colors for all clusters"
np.random.seed(SEED)


def normalize(X):
    return StandardScaler().fit_transform(X)

def get_regular(n_clusters):
    # spiral pattern
    centers = [
        [0, 0],
        [1, 0],
        [1, 1],
        [0, 1],
        [-1, 1],
        [-1, 0],
        [-1, -1],
        [0, -1],
        [1, -1],
        [2, -1],
    ][:n_clusters]
    assert len(centers) == n_clusters
    X, labels = make_blobs(n_samples=N_SAMPLES, centers=centers, cluster_std=0.25, random_state=SEED)
    return normalize(X), labels


def get_circles(n_clusters):
    X, labels = make_circles(n_samples=N_SAMPLES, factor=0.5, noise=0.05, random_state=SEED)
    return normalize(X), labels


def get_moons(n_clusters):
    X, labels = make_moons(n_samples=N_SAMPLES, noise=0.05, random_state=SEED)
    return normalize(X), labels


def get_noise(n_clusters):
    np.random.seed(SEED)
    X, labels = np.random.rand(N_SAMPLES, 2), np.random.randint(0, n_clusters, size=(N_SAMPLES,))
    return normalize(X), labels


def get_anisotropic(n_clusters):
    X, labels = make_blobs(n_samples=N_SAMPLES, centers=n_clusters, random_state=170)
    transformation = [[0.6, -0.6], [-0.4, 0.8]]
    X = np.dot(X, transformation)
    return X, labels


def get_varied(n_clusters):
    cluster_std = [1.0, 2.5, 0.5, 1.0, 2.5, 0.5, 1.0, 2.5, 0.5, 1.0][:n_clusters]
    assert len(cluster_std) == n_clusters
    X, labels = make_blobs(
        n_samples=N_SAMPLES, centers=n_clusters, cluster_std=cluster_std, random_state=SEED
    )
    return normalize(X), labels


def get_spiral(n_clusters):
    # from https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_clustering.html
    np.random.seed(SEED)
    t = 1.5 * np.pi * (1 + 3 * np.random.rand(1, N_SAMPLES))
    x = t * np.cos(t)
    y = t * np.sin(t)
    X = np.concatenate((x, y))
    X += 0.7 * np.random.randn(2, N_SAMPLES)
    X = np.ascontiguousarray(X.T)

    labels = np.zeros(N_SAMPLES, dtype=int)
    return normalize(X), labels


DATA_MAPPING = {
    'regular': get_regular,
    'circles': get_circles,
    'moons': get_moons,
    'spiral': get_spiral,
    'noise': get_noise,
    'anisotropic': get_anisotropic,
    'varied': get_varied,
}


def get_groundtruth_model(X, labels, n_clusters, **kwargs):
    # dummy model exposing the true labels through a labels_ attribute
    class Dummy:
        def __init__(self, y):
            self.labels_ = y

    return Dummy(labels)


def get_kmeans(X, labels, n_clusters, **kwargs):
    model = KMeans(init="k-means++", n_clusters=n_clusters, n_init=10, random_state=SEED)
    model.set_params(**kwargs)
    return model.fit(X)


def get_dbscan(X, labels, n_clusters, **kwargs):
    model = DBSCAN(eps=0.3)
    model.set_params(**kwargs)
    return model.fit(X)


def get_agglomerative(X, labels, n_clusters, **kwargs):
    connectivity = kneighbors_graph(
        X, n_neighbors=n_clusters, include_self=False
    )
    # make connectivity symmetric
    connectivity = 0.5 * (connectivity + connectivity.T)
    model = AgglomerativeClustering(
        n_clusters=n_clusters, linkage="ward", connectivity=connectivity
    )
    model.set_params(**kwargs)
    return model.fit(X)


def get_meanshift(X, labels, n_clusters, **kwargs):
    bandwidth = estimate_bandwidth(X, quantile=0.25)
    model = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    model.set_params(**kwargs)
    return model.fit(X)


def get_spectral(X, labels, n_clusters, **kwargs):
    model = SpectralClustering(
        n_clusters=n_clusters,
        eigen_solver="arpack",
        affinity="nearest_neighbors",
    )
    model.set_params(**kwargs)
    return model.fit(X)


def get_optics(X, labels, n_clusters, **kwargs):
    model = OPTICS(
        min_samples=7,
        xi=0.05,
        min_cluster_size=0.1,
    )
    model.set_params(**kwargs)
    return model.fit(X)


def get_birch(X, labels, n_clusters, **kwargs):
    model = Birch(n_clusters=n_clusters)
    model.set_params(**kwargs)
    return model.fit(X)


def get_gaussianmixture(X, labels, n_clusters, **kwargs):
    model = GaussianMixture(
        n_components=n_clusters, covariance_type="full", random_state=SEED,
    )
    model.set_params(**kwargs)
    return model.fit(X)


MODEL_MAPPING = {
    'True labels': get_groundtruth_model,
    'KMeans': get_kmeans,
    'DBSCAN': get_dbscan,
    'MeanShift': get_meanshift,
    'SpectralClustering': get_spectral,
    'OPTICS': get_optics,
    'Birch': get_birch,
    'GaussianMixture': get_gaussianmixture,
    'AgglomerativeClustering': get_agglomerative,
}


def plot_clusters(ax, X, labels):
    set_clusters = set(labels)
    set_clusters.discard(-1)  # -1 signifies outliers, which we plot separately
    for label, color in zip(sorted(set_clusters), COLORS):
        idx = labels == label
        if not sum(idx):
            continue
        ax.scatter(X[idx, 0], X[idx, 1], color=color)

    # show outliers (if any)
    idx = labels == -1
    if sum(idx):
        ax.scatter(X[idx, 0], X[idx, 1], c='k', marker='x')

    ax.grid(None)
    ax.set_xticks([])
    ax.set_yticks([])
    return ax


def cluster(dataset: str, n_clusters: int, clustering_algorithm: str):
    if isinstance(n_clusters, dict):
        n_clusters = n_clusters['value']
    else:
        n_clusters = int(n_clusters)

    X, labels = DATA_MAPPING[dataset](n_clusters)
    model = MODEL_MAPPING[clustering_algorithm](X, labels, n_clusters=n_clusters)
    if hasattr(model, "labels_"):
        y_pred = model.labels_.astype(int)
    else:
        y_pred = model.predict(X)

    fig, ax = plt.subplots(figsize=FIGSIZE)

    plot_clusters(ax, X, y_pred)
    ax.set_title(clustering_algorithm, fontsize=16)

    return fig


title = "Clustering with Scikit-learn"
description = (
    "This example shows how different clustering algorithms work. Simply pick "
    "the dataset and the number of clusters to see how the clustering algorithms work. "
    "Colored cirles are (predicted) labels and black x are outliers."
)


def iter_grid(n_rows, n_cols):
    # create a grid using gradio Block
    for _ in range(n_rows):
        with gr.Row():
            for _ in range(n_cols):
                with gr.Column():
                    yield

with gr.Blocks(title=title) as demo:
    gr.HTML(f"{title}")
    gr.Markdown(description)

    input_models = list(MODEL_MAPPING)
    input_data = gr.Radio(
        list(DATA_MAPPING),
        value="regular",
        label="dataset"
    )
    input_n_clusters = gr.Slider(
        minimum=1,
        maximum=MAX_CLUSTERS,
        value=4,
        step=1,
        label='Number of clusters'
    )
    n_rows = int(math.ceil(len(input_models) / N_COLS))
    counter = 0
    for _ in iter_grid(n_rows, N_COLS):
        if counter >= len(input_models):
            break

        input_model = input_models[counter]
        plot = gr.Plot(label=input_model)
        fn = partial(cluster, clustering_algorithm=input_model)
        input_data.change(fn=fn, inputs=[input_data, input_n_clusters], outputs=plot)
        input_n_clusters.change(fn=fn, inputs=[input_data, input_n_clusters], outputs=plot)
        counter += 1

demo.launch()



gradio/clustering built with Gradio. Hosted on Spaces
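
One pattern worth calling out: the loop at the bottom binds a separate handler to each plot with functools.partial, freezing the algorithm name per plot so a single cluster() function can serve all nine. A minimal sketch of that binding:

from functools import partial

def cluster(dataset, n_clusters, clustering_algorithm):
    return f"{clustering_algorithm} on {dataset} with {n_clusters} clusters"

# one callable per algorithm, each with clustering_algorithm pre-bound
handlers = [partial(cluster, clustering_algorithm=name) for name in ("KMeans", "DBSCAN")]
print(handlers[0]("moons", 2))  # KMeans on moons with 2 clusters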

A simple dashboard showing PyPI download stats for Python libraries. It updates
on load and has no buttons!

import gradio as gr
import pypistats
from datetime import date
from dateutil.relativedelta import relativedelta
import pandas as pd
from prophet import Prophet
pd.options.plotting.backend = "plotly"

def get_forecast(lib, time):

    data = pypistats.overall(lib, total=True, format="pandas")
    data = data.groupby("category").get_group("with_mirrors").sort_values("date")
    start_date = date.today() - relativedelta(months=int(time.split(" ")[0]))
    df = data[(data['date'] > str(start_date))] 

    df1 = df[['date','downloads']]
    df1.columns = ['ds','y']

    m = Prophet()
    m.fit(df1)
    future = m.make_future_dataframe(periods=90)
    forecast = m.predict(future)
    fig1 = m.plot(forecast)
    return fig1 

with gr.Blocks() as demo:
    gr.Markdown(
    """
    **PyPI Download Stats 📈 with Prophet Forecasting**: see live download stats for popular open-source libraries 🤗 along with a 3 month forecast using Prophet. The [source code for this Gradio demo is here](https://huggingface.co/spaces/gradio/timeseries-forecasting-with-prophet/blob/main/app.py).
    """)
    with gr.Row():
        lib = gr.Dropdown(["pandas", "scikit-learn", "torch", "prophet"], label="Library", value="pandas")
        time = gr.Dropdown(["3 months", "6 months", "9 months", "12 months"], label="Downloads over the last...", value="12 months")

    plt = gr.Plot()

    lib.change(get_forecast, [lib, time], plt, queue=False)
    time.change(get_forecast, [lib, time], plt, queue=False)    
    demo.load(get_forecast, [lib, time], plt, queue=False)    

demo.launch()


gradio/timeseries-forecasting-with-prophet built with Gradio. Hosted on Spaces

This demo takes in 12 inputs from the user in dropdowns and sliders and predicts
income. It also has a separate button for explaining the prediction.

import gradio as gr
import random
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import shap
import xgboost as xgb
from datasets import load_dataset


matplotlib.use("Agg")
dataset = load_dataset("scikit-learn/adult-census-income")
X_train = dataset["train"].to_pandas()
_ = X_train.pop("fnlwgt")
_ = X_train.pop("race")
y_train = X_train.pop("income")
y_train = (y_train == ">50K").astype(int)
categorical_columns = [
    "workclass",
    "education",
    "marital.status",
    "occupation",
    "relationship",
    "sex",
    "native.country",
]
X_train = X_train.astype({col: "category" for col in categorical_columns})
data = xgb.DMatrix(X_train, label=y_train, enable_categorical=True)
model = xgb.train(params={"objective": "binary:logistic"}, dtrain=data)
explainer = shap.TreeExplainer(model)

def predict(*args):
    df = pd.DataFrame([args], columns=X_train.columns)
    df = df.astype({col: "category" for col in categorical_columns})
    pos_pred = model.predict(xgb.DMatrix(df, enable_categorical=True))
    return {">50K": float(pos_pred[0]), "<=50K": 1 - float(pos_pred[0])}


def interpret(*args):
    df = pd.DataFrame([args], columns=X_train.columns)
    df = df.astype({col: "category" for col in categorical_columns})
    shap_values = explainer.shap_values(xgb.DMatrix(df, enable_categorical=True))
    scores_desc = list(zip(shap_values[0], X_train.columns))
    scores_desc = sorted(scores_desc)
    fig_m = plt.figure(tight_layout=True)
    plt.barh([s[1] for s in scores_desc], [s[0] for s in scores_desc])
    plt.title("Feature Shap Values")
    plt.ylabel("Feature")
    plt.xlabel("Shap Value")
    plt.tight_layout()
    return fig_m


unique_class = sorted(X_train["workclass"].unique())
unique_education = sorted(X_train["education"].unique())
unique_marital_status = sorted(X_train["marital.status"].unique())
unique_relationship = sorted(X_train["relationship"].unique())
unique_occupation = sorted(X_train["occupation"].unique())
unique_sex = sorted(X_train["sex"].unique())
unique_country = sorted(X_train["native.country"].unique())

with gr.Blocks() as demo:
    gr.Markdown("""
    **Income Classification with XGBoost 💰**: This demo uses an XGBoost classifier to predict income based on demographic factors, along with Shapley-value-based *explanations*. The [source code for this Gradio demo is here](https://huggingface.co/spaces/gradio/xgboost-income-prediction-with-explainability/blob/main/app.py).
    """)
    with gr.Row():
        with gr.Column():
            age = gr.Slider(label="Age", minimum=17, maximum=90, step=1, randomize=True)
            work_class = gr.Dropdown(
                label="Workclass",
                choices=unique_class,
                value=lambda: random.choice(unique_class),
            )
            education = gr.Dropdown(
                label="Education Level",
                choices=unique_education,
                value=lambda: random.choice(unique_education),
            )
            years = gr.Slider(
                label="Years of schooling",
                minimum=1,
                maximum=16,
                step=1,
                randomize=True,
            )
            marital_status = gr.Dropdown(
                label="Marital Status",
                choices=unique_marital_status,
                value=lambda: random.choice(unique_marital_status),
            )
            occupation = gr.Dropdown(
                label="Occupation",
                choices=unique_occupation,
                value=lambda: random.choice(unique_occupation),
            )
            relationship = gr.Dropdown(
                label="Relationship Status",
                choices=unique_relationship,
                value=lambda: random.choice(unique_relationship),
            )
            sex = gr.Dropdown(
                label="Sex", choices=unique_sex, value=lambda: random.choice(unique_sex)
            )
            capital_gain = gr.Slider(
                label="Capital Gain",
                minimum=0,
                maximum=100000,
                step=500,
                randomize=True,
            )
            capital_loss = gr.Slider(
                label="Capital Loss", minimum=0, maximum=10000, step=500, randomize=True
            )
            hours_per_week = gr.Slider(
                label="Hours Per Week Worked", minimum=1, maximum=99, step=1
            )
            country = gr.Dropdown(
                label="Native Country",
                choices=unique_country,
                value=lambda: random.choice(unique_country),
            )
        with gr.Column():
            label = gr.Label()
            plot = gr.Plot()
            with gr.Row():
                predict_btn = gr.Button(value="Predict")
                interpret_btn = gr.Button(value="Explain")
            predict_btn.click(
                predict,
                inputs=[
                    age,
                    work_class,
                    education,
                    years,
                    marital_status,
                    occupation,
                    relationship,
                    sex,
                    capital_gain,
                    capital_loss,
                    hours_per_week,
                    country,
                ],
                outputs=[label],
            )
            interpret_btn.click(
                interpret,
                inputs=[
                    age,
                    work_class,
                    education,
                    years,
                    marital_status,
                    occupation,
                    relationship,
                    sex,
                    capital_gain,
                    capital_loss,
                    hours_per_week,
                    country,
                ],
                outputs=[plot],
            )

demo.launch()



gradio/xgboost-income-prediction-with-explainability built with Gradio. Hosted
on Spaces

A simple dashboard ranking spaces by number of likes.

import gradio as gr
import requests
import pandas as pd
from huggingface_hub.hf_api import SpaceInfo
path = "https://huggingface.co/api/spaces"


def get_blocks_party_spaces():
    r = requests.get(path)
    d = r.json()
    spaces = [SpaceInfo(**x) for x in d]
    blocks_spaces = {}
    for i in range(0,len(spaces)):
        if spaces[i].id.split('/')[0] == 'Gradio-Blocks' and hasattr(spaces[i], 'likes') and spaces[i].id != 'Gradio-Blocks/Leaderboard' and spaces[i].id != 'Gradio-Blocks/README':
            blocks_spaces[spaces[i].id]=spaces[i].likes
    df = pd.DataFrame(
    [{"Spaces_Name": Spaces, "likes": likes} for Spaces,likes in blocks_spaces.items()])
    df = df.sort_values(by=['likes'],ascending=False)
    return df

block = gr.Blocks()

with block:    
    gr.Markdown("""Leaderboard for the most popular Blocks Event Spaces. To learn more and join, see Blocks Party Event""")
    with gr.Tabs():
        with gr.TabItem("Blocks Party Leaderboard"):
            with gr.Row():
                data = gr.outputs.Dataframe(type="pandas")
            with gr.Row():
                data_run = gr.Button("Refresh")
                data_run.click(get_blocks_party_spaces, inputs=None, outputs=data)
    # running the function on page load in addition to when the button is clicked
    block.load(get_blocks_party_spaces, inputs=None, outputs=data)               

block.launch()




gradio/leaderboard built with Gradio. Hosted on Spaces

Calculate taxes using Textbox, Radio, and Dataframe components.

import gradio as gr

def tax_calculator(income, marital_status, assets):
    tax_brackets = [(10, 0), (25, 8), (60, 12), (120, 20), (250, 30)]
    total_deductible = sum(assets["Cost"])
    taxable_income = income - total_deductible

    total_tax = 0
    for bracket, rate in tax_brackets:
        if taxable_income > bracket:
            total_tax += (taxable_income - bracket) * rate / 100

    if marital_status == "Married":
        total_tax *= 0.75
    elif marital_status == "Divorced":
        total_tax *= 0.8

    return round(total_tax)

demo = gr.Interface(
    tax_calculator,
    [
        "number",
        gr.Radio(["Single", "Married", "Divorced"]),
        gr.Dataframe(
            headers=["Item", "Cost"],
            datatype=["str", "number"],
            label="Assets Purchased this Year",
        ),
    ],
    "number",
    examples=[
        [10000, "Married", [["Suit", 5000], ["Laptop", 800], ["Car", 1800]]],
        [80000, "Single", [["Suit", 800], ["Watch", 1800], ["Car", 800]]],
    ],
)

demo.launch()



gradio/tax_calculator built with Gradio. Hosted on Spaces
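
Note how the brackets compound: each (threshold, rate) tuple adds rate% of the income above that threshold, so a higher income accumulates every lower bracket's charge. A worked check against the first example row (a direct call for illustration; in the app, Gradio passes the Dataframe value in as a pandas DataFrame):

import pandas as pd

assets = pd.DataFrame({"Item": ["Suit", "Laptop", "Car"], "Cost": [5000, 800, 1800]})
# taxable income: 10000 - 7600 = 2400, which clears every threshold:
#   (10, 0%) -> 0.0, (25, 8%) -> 190.0, (60, 12%) -> 280.8,
#   (120, 20%) -> 456.0, (250, 30%) -> 645.0; total = 1571.8
# married multiplier 0.75 -> 1178.85, rounded to 1179
print(tax_calculator(10000, "Married", assets))  # 1179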


🎤 AUDIO & SPEECH

Text to Speech · Speech to Text (ASR) · Musical Instrument Identification · Speaker Verification

This demo converts text to speech in 14 languages.

import tempfile
import gradio as gr
from neon_tts_plugin_coqui import CoquiTTS

LANGUAGES = list(CoquiTTS.langs.keys())
coquiTTS = CoquiTTS()

def tts(text: str, language: str):
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
        coquiTTS.get_tts(text, fp, speaker = {"language" : language})
        return fp.name

inputs = [gr.Textbox(label="Input", value=CoquiTTS.langs["en"]["sentence"], max_lines=3), 
            gr.Radio(label="Language", choices=LANGUAGES, value="en")]
outputs = gr.Audio(label="Output")

demo = gr.Interface(fn=tts, inputs=inputs, outputs=outputs)

demo.launch()


gradio/neon-tts-plugin-coqui built with Gradio. Hosted on Spaces

Automatic speech recognition in English. Record from your microphone and the app
will transcribe the audio.

import gradio as gr
import os

# save your HF API token from https://hf.co/settings/tokens as an env variable to avoid rate limiting
auth_token = os.getenv("auth_token")

# automatically load the interface from a HF model 
# you can remove the api_key parameter if you don't care about rate limiting. 
demo = gr.load(
    "huggingface/facebook/wav2vec2-base-960h",
    title="Speech-to-text",
    inputs="mic",
    description="Let me try to guess what you're saying!",
    api_key=auth_token
)

demo.launch()



gradio/automatic-speech-recognition built with Gradio. Hosted on Spaces

This demo identifies musical instruments from an audio file. It uses Gradio's
Audio and Label components.

import gradio as gr
import torch
import torchaudio
from timeit import default_timer as timer
from data_setups import audio_preprocess, resample
import gdown

url = 'https://drive.google.com/uc?id=1X5CR18u0I-ZOi_8P0cNptCe5JGk9Ro0C'
output = 'piano.wav'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=1W-8HwmGR5SiyDbUcGAZYYDKdCIst07__'
output= 'torch_efficientnet_fold2_CNN.pth'
gdown.download(url, output, quiet=False)
device = "cuda" if torch.cuda.is_available() else "cpu"
SAMPLE_RATE = 44100
AUDIO_LEN = 2.90
model = torch.load("torch_efficientnet_fold2_CNN.pth", map_location=torch.device('cpu'))
LABELS = [
    "Cello", "Clarinet", "Flute", "Acoustic Guitar", "Electric Guitar", "Organ", "Piano", "Saxophone", "Trumpet", "Violin", "Voice"
]
example_list = [
    ["piano.wav"]
]


def predict(audio_path):
    start_time = timer()
    wavform, sample_rate = torchaudio.load(audio_path)
    wav = resample(wavform, sample_rate, SAMPLE_RATE)
    if len(wav) > int(AUDIO_LEN * SAMPLE_RATE):
        wav = wav[:int(AUDIO_LEN * SAMPLE_RATE)]
    else:
        print(f"input length {len(wav)} too small!, need over {int(AUDIO_LEN * SAMPLE_RATE)}")
        return
    img = audio_preprocess(wav, SAMPLE_RATE).unsqueeze(0)
    model.eval()
    with torch.inference_mode():
        pred_probs = torch.softmax(model(img), dim=1)
    pred_labels_and_probs = {LABELS[i]: float(pred_probs[0][i]) for i in range(len(LABELS))}
    pred_time = round(timer() - start_time, 5)
    return pred_labels_and_probs, pred_time

demo = gr.Interface(fn=predict,
                    inputs=gr.Audio(type="filepath"),
                    outputs=[gr.Label(num_top_classes=11, label="Predictions"), 
                             gr.Number(label="Prediction time (s)")],
                    examples=example_list,
                    cache_examples=False
                    )

demo.launch(debug=False)



gradio/musical_instrument_identification built with Gradio. Hosted on Spaces

This demo identifies if two speakers are the same person using Gradio's Audio
and HTML components.

import gradio as gr
import torch
from torchaudio.sox_effects import apply_effects_file
from transformers import AutoFeatureExtractor, AutoModelForAudioXVector
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

OUTPUT_OK = """
    The speakers are {:.1f}% similar.
    Welcome, human!
    (You must get at least 85% to be considered the same person)
"""
OUTPUT_FAIL = """
    The speakers are {:.1f}% similar.
    You shall not pass!
    (You must get at least 85% to be considered the same person)
"""

EFFECTS = [
    ["remix", "-"],
    ["channels", "1"],
    ["rate", "16000"],
    ["gain", "-1.0"],
    ["silence", "1", "0.1", "0.1%", "-1", "0.1", "0.1%"],
    ["trim", "0", "10"],
]

THRESHOLD = 0.85

model_name = "microsoft/unispeech-sat-base-plus-sv"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModelForAudioXVector.from_pretrained(model_name).to(device)
cosine_sim = torch.nn.CosineSimilarity(dim=-1)


def similarity_fn(path1, path2):
    if not (path1 and path2):
        return 'ERROR: Please record audio for *both* speakers!'

    wav1, _ = apply_effects_file(path1, EFFECTS)
    wav2, _ = apply_effects_file(path2, EFFECTS)
    print(wav1.shape, wav2.shape)

    input1 = feature_extractor(wav1.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device)
    input2 = feature_extractor(wav2.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device)

    with torch.no_grad():
        emb1 = model(input1).embeddings
        emb2 = model(input2).embeddings
    emb1 = torch.nn.functional.normalize(emb1, dim=-1).cpu()
    emb2 = torch.nn.functional.normalize(emb2, dim=-1).cpu()
    similarity = cosine_sim(emb1, emb2).numpy()[0]

    if similarity >= THRESHOLD:
        output = OUTPUT_OK.format(similarity * 100)
    else:
        output = OUTPUT_FAIL.format(similarity * 100)

    return output

inputs = [
    gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #1"),
    gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #2"),
]
output = gr.outputs.HTML(label="")


description = (
    "This demo from Microsoft will compare two speech samples and determine if they are from the same speaker. "
    "Try it with your own voice!"
)
article = (
    "🎙️ Learn more about UniSpeech-SAT | "
    "📚 UniSpeech-SAT paper | "
    "📚 X-Vector paper"
)
examples = [
    ["samples/cate_blanch.mp3", "samples/cate_blanch_2.mp3"],
    ["samples/cate_blanch.mp3", "samples/heath_ledger.mp3"],
]

interface = gr.Interface(
    fn=similarity_fn,
    inputs=inputs,
    outputs=output,
    layout="horizontal",
    allow_flagging=False,
    live=False,
    examples=examples,
    cache_examples=False
)
interface.launch()



gradio/same-person-or-different built with Gradio. Hosted on Spaces
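
Since both embeddings are L2-normalized before comparison, the cosine similarity reduces to a plain dot product, and the 0.85 threshold corresponds directly to the 85% shown to the user. A quick self-contained check:

import torch

a = torch.nn.functional.normalize(torch.randn(1, 512), dim=-1)
b = torch.nn.functional.normalize(torch.randn(1, 512), dim=-1)
# for unit vectors, cosine similarity equals the dot product
assert torch.allclose(torch.nn.CosineSimilarity(dim=-1)(a, b), (a * b).sum(-1))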