I am trying to use pygfx to render a terrain. The terrain has been triangulated from a heightmap, so I have vertices and normals, and I am trying to generate a mesh in pygfx. I have not found any mesh under geometry.

Incidentally, the "putting together example" in the pygfx guide is incorrect. It includes a reference to "BoxGeometry" when it should be "box_geometry".

Any help will be most welcome. I am really interested in using this package.


Replies: 4 comments · 14 replies


Try this example as a starting point: https://github.com/pygfx/pygfx/blob/6e1b3bf978bba92b7c217365fca83955c33d481a/examples/triangle.py
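
For a terrain specifically: there is no ready-made terrain geometry, but gfx.Geometry accepts arbitrary per-vertex buffers, so a triangulated heightmap can be wrapped directly. A minimal sketch (the grid data and the flat placeholder normals here are made up; plug in your own vertices, normals and triangle indices):

import numpy as np
import pygfx as gfx

# Tiny stand-in heightmap; real data would come from your triangulation
heights = np.array(
    [[0.0, 0.2, 0.1], [0.1, 0.5, 0.3], [0.0, 0.2, 0.1]], np.float32
)
ny, nx = heights.shape
xx, yy = np.meshgrid(np.arange(nx, dtype=np.float32), np.arange(ny, dtype=np.float32))
positions = np.column_stack([xx.ravel(), yy.ravel(), heights.ravel()])

# Two triangles per grid cell
faces = []
for j in range(ny - 1):
    for i in range(nx - 1):
        a = j * nx + i
        faces.append([a, a + 1, a + nx])
        faces.append([a + 1, a + nx + 1, a + nx])
indices = np.array(faces, np.uint32)

# Flat placeholder normals; use the ones from your triangulation instead
normals = np.tile(np.array([0, 0, 1], np.float32), (len(positions), 1))

geometry = gfx.Geometry(positions=positions, indices=indices, normals=normals)
mesh = gfx.Mesh(geometry, gfx.MeshPhongMaterial())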

Thanks for your feedback. We're still in beta so our docs are not a big priority at this point. Once the API stabilizes and the base feature set is complete we'll make another pass on the documentation.

5 replies
@mllobera

Thank you!

@mllobera

I managed to get some output.
[image]

Is there a way to control the camera orientation? Is there a way to give it a viewing direction?

@Korijn

Of course! Take a look: https://github.com/pygfx/pygfx/blob/6e1b3bf978bba92b7c217365fca83955c33d481a/examples/orbit_camera.py

@berendkleinhaneveld

Looks cool! You can set the x, y and z attributes on the camera's position: camera.position.y = ...

If you want to control the camera with the mouse, take a look here:

camera = gfx.PerspectiveCamera(70, 16 / 9)
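
To make the positioning part concrete, a minimal sketch, assuming the three.js-style linalg API pygfx exposed at the time (camera.position is a Vector3 and look_at takes one):

import pygfx as gfx

camera = gfx.PerspectiveCamera(70, 16 / 9)
# Move the camera up and back, then aim it at the origin
camera.position.set(0.0, 50.0, 100.0)
camera.look_at(gfx.linalg.Vector3(0.0, 0.0, 0.0))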

@mllobera

Hi,... here is the latest
[image]
I was able to control the surface with the mouse, but I would still like to be able to specify a view direction (through a vector). Apparently you can give the camera an object to look at, but not a direction, which I found a bit odd. Another issue is the lack of control over the lighting (I believe there is an open issue regarding this?). I take it that the AxesHelper is located by default at (0,0,0)? (Within the geosciences the origin would typically be at the bottom left.) I found a few more typos (or at least parts that were a bit confusing). Still hoping to use this to develop some new tools I am interested in.
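
For reference, a viewing direction can be emulated with look_at by targeting a point a unit step along that direction from the camera position. A small sketch, again assuming the three.js-style Vector3 methods (clone, add, normalize):

import pygfx as gfx

camera = gfx.PerspectiveCamera(45, 4 / 3)
camera.position.set(100.0, 100.0, 50.0)

# Aim along a direction vector by looking at a point one unit along it
direction = gfx.linalg.Vector3(-1.0, -1.0, -0.25).normalize()
camera.look_at(camera.position.clone().add(direction))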


Hi guys,
I was wondering if there is a way to gain access to the depth buffer?
I have a few questions regarding the near-future development of pygfx and was wondering whether this is the right place to ask them?
Thank you

6 replies
@mllobera

Thanks for the answer.
Is there any workaround for this? My intention was to use pygfx as a core component of a Python package I was starting to develop. The idea behind this package is to be able to render and analyze digital landscape scenes very rapidly. Currently, these types of analyses have been conducted, in a very limited way, using tools such as Geographic Information Systems (GIS). The main problem with these is that they do not handle 3D objects well (or at all), and most of their calculations (e.g. visibility calculations) are computed in software. Instead, my idea was to use the graphics card to do all the heavy lifting (rendering terrain plus any built environment) and then retrieve or derive quantitative information such as depth of view, visual angles, etc. I want to stick to Python because I already use it for so many other things (stats, databases, graphics, etc.), and I was very relieved when I saw that you were working on something similar to threejs.

I was checking through the different issues that have been raised, just to get acquainted with the project, and I saw that you mentioned the possibility of adding depth-buffer output to renderer.snapshot()?

Do you or any of you have any suggestions or ideas? I have a few months coming up soon that I wanted to dedicate to developing this. While I have a robust knowledge of Python, I have no knowledge of WebGPU and limited experience contributing to a larger collaborative project.

@almarklein

This is a good question. I think we should create a separate issue for this, to list the possibilities. I'm investigating what the limitations are.

@almarklein

See #320

@mllobera

Thank you for the interest! I have been looking around, and it would appear that this is quite common amongst other implementations. The idea of tapping into the depth buffer, and/or any other information associated with the rendered image, has been around for a while but, surprisingly (at least to me), has not been exploited to its fullest. Being able to derive real-world information directly from the rendered scene (and use it to derive additional information) would tremendously enhance certain calculations that currently are just not done (I think because they are too expensive).
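
As a concrete example of the kind of real-world quantity that can be derived: with a standard perspective projection and WebGPU's [0, 1] depth range, a depth-buffer value can be linearized back to a metric eye-space distance. A small sketch (this is the usual perspective linearization, not anything pygfx-specific; if a shader inverts depth, as the snippet further down does, undo that first):

import numpy as np

def depth_to_distance(d, near, far):
    # Invert the standard [0, 1] perspective depth mapping:
    # d = far * (z - near) / (z * (far - near))
    return near * far / (far - d * (far - near))

# With near=8 and far=12 (the camera values used in the snippet below):
print(depth_to_distance(np.array([0.0, 0.5, 1.0]), 8.0, 12.0))
# -> [ 8.   9.6 12. ]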

@almarklein

It's true that it's common to be able to read the depth buffer. But it sorta fell victim to our advanced handling of transparency.


I'm sorry to see this so late. I wonder if the solution here (#320 (comment)) can help you?

2 replies
@mllobera

I am not entirely certain what I am supposed to be looking at (probably not familiar enough).

@panxinmiao

I added a "MeshDepthMaterial" example which can output a grayscale depth image.

1 reply
@panxinmiao

Sorry for the late reply.

> Thanks for the clarification. Would converting it into a grayscale image scale the range of all output to 0-255? I think this might have an impact on depth resolution, which is critical, at least, for my purposes.

It's true, I forgot that. The default output color texture format is rgba8unorm, which is really not suitable for depth images.

I think it may be a better choice to use the lower-level wgpu-py library to build a specific rendering pipeline.

To achieve this with pygfx, we need to customize the rendering process a little. Maybe this code is a workaround:

"""
A material for drawing geometry by depth
"""

import numpy as np
import cv2
import wgpu
from wgpu.gui.offscreen import WgpuCanvas
import pygfx as gfx
from pygfx.renderers.wgpu import Binding
from pygfx.renderers.wgpu.meshshader import MeshShader
from pygfx.renderers.wgpu._blender import BaseFragmentBlender, OpaquePass


class DepthMaterial(gfx.MeshBasicMaterial):
    pass


@gfx.renderers.wgpu.register_wgpu_render_function(gfx.Mesh, DepthMaterial)
class DepthShader(MeshShader):

    # Mark as render-shader (as opposed to compute-shader)
    type = "render"

    def get_resources(self, wobject, shared):
        geometry = wobject.geometry

        bindings = {
            0: Binding("u_stdinfo", "buffer/uniform", shared.uniform_buffer),
            1: Binding("u_wobject", "buffer/uniform", wobject.uniform_buffer),
        }
        self.define_bindings(0, bindings)

        vertex_attributes = {}

        vertex_attributes["position"] = geometry.positions

        self.define_vertex_buffer(vertex_attributes, instanced=self["instanced"])

        return {
            "index_buffer": geometry.indices,
            "vertex_buffers": list(vertex_attributes.values()),
            "instance_buffer": wobject.instance_infos if self["instanced"] else None,
            "bindings": {
                0: bindings,
            },
        }

    def get_pipeline_info(self, wobject, shared):
        # We draw triangles, no culling
        return {
            "primitive_topology": wgpu.PrimitiveTopology.triangle_list,
            "cull_mode": wgpu.CullMode.none,
        }

    def get_render_info(self, wobject, shared):
        geometry = wobject.geometry

        n = geometry.indices.data.size
        n_instances = 1
        if self["instanced"]:
            n_instances = wobject.instance_buffer.nitems

        return {
            "indices": (n, n_instances),
            "render_mask": 3,
        }

    def get_code(self):
        # Here we put together the full (templated) shader code
        return self.code_definitions() + self.code_vertex() + self.code_fragment()

    def code_vertex(self):
        return """
        struct VertexIn {
            @location(0) position: vec3<f32>,
        };

        @stage(vertex)
        fn vs_main(in: VertexIn) -> @builtin(position) vec4<f32> {
            let u_mvp = u_stdinfo.projection_transform * u_stdinfo.cam_transform * u_wobject.world_transform;
            let pos = u_mvp * vec4<f32>( in.position, 1.0 );

            return pos;
        }
        """

    def code_fragment(self):
        return """
        struct FragmentOutput {
            @location(0) color: vec4<f32>,
            @location(1) pick: vec4<u32>,
        };
        @stage(fragment)
        fn fs_main(@builtin(position) position :vec4<f32>) -> FragmentOutput {
            var out: FragmentOutput;
            let depth = 1.0 - position.z; // Invert depth  TODO: logarithmic depth
            out.color = vec4<f32>(depth);
            return out;
        }
        """


class DepthFragmentBlender(BaseFragmentBlender):

    passes = [OpaquePass()]

    def __init__(self):
        super().__init__()

        usg = wgpu.TextureUsage

        # Replace the default rgba8unorm color target with a single-channel
        # float format, so depth values keep full precision
        self._texture_info["color"] = (
            wgpu.TextureFormat.r32float,
            usg.RENDER_ATTACHMENT | usg.COPY_SRC | usg.TEXTURE_BINDING,
        )


canvas = WgpuCanvas(size=(640, 480), pixel_ratio=1)
renderer = gfx.WgpuRenderer(canvas)
renderer._blender = DepthFragmentBlender()
# Near/far planes chosen tightly around the object to maximize depth resolution
camera = gfx.PerspectiveCamera(45, 640 / 480, 8, 12)
camera.position.z = 10

t = gfx.Mesh(gfx.torus_knot_geometry(1, 0.3, 128, 32), DepthMaterial())

scene = gfx.Scene()
scene.add(t)

canvas.request_draw(lambda: renderer.render(scene, camera))

def snapshot(renderer):
    # Read the float depth image back from the blender's color attachment
    device = renderer._shared.device
    texture = renderer._blender.color_tex
    size = texture.size
    bytes_per_pixel = 4  # r32float: one 4-byte float per pixel

    data = device.queue.read_texture(
        {
            "texture": texture,
            "mip_level": 0,
            "origin": (0, 0, 0),
        },
        {
            "offset": 0,
            "bytes_per_row": bytes_per_pixel * size[0],
            "rows_per_image": size[1],
        },
        size,
    )

    return np.frombuffer(data, np.float32).reshape(size[1], size[0], 1)

if __name__ == "__main__":
    canvas.draw()
    im = snapshot(renderer)
    print(im.shape)
    cv2.imshow("im", im)
    cv2.waitKey(0)
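
Note the key move in the snippet above (an observation about this workaround, not an official pygfx API): the custom blender swaps the default rgba8unorm color target for a single-channel r32float texture, so the depth written by the fragment shader survives readback at full float precision instead of being quantized to 0-255.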