It uses my technique of rendering a ton of particles using ThreeJS's Points. All other effects are done in shaders!
A very basic attenuation cone is calculated by taking the dot product of the light position and each vertex in a vertex shader. Then, just blend two textures based on that mask!
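A hedged sketch of that dot-product mask (the uniform names and the two-texture blend here are my illustration, not the original code):

```javascript
// Vertex shader: compare each vertex's direction with the light direction;
// the resulting mask is 1 facing the light, fading to 0 away from it.
const vertexShader = /* glsl */ `
  uniform vec3 lightPosition;
  varying float vMask;
  varying vec2 vUv;
  void main() {
    vMask = max(dot(normalize(position), normalize(lightPosition)), 0.0);
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Fragment shader: blend two textures by the mask (names are illustrative).
const fragmentShader = /* glsl */ `
  uniform sampler2D dayTexture;
  uniform sampler2D nightTexture;
  varying float vMask;
  varying vec2 vUv;
  void main() {
    gl_FragColor = mix(texture2D(nightTexture, vUv), texture2D(dayTexture, vUv), vMask);
  }
`;
```

These strings would go into a THREE.ShaderMaterial with the matching uniforms.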
Not really specific to OP's post but THREE.js is already a fantastic foundation for making a program like z-brush. One of my favorite 3D sculpt tools of all time was written in THREE:
https://stephaneginier.com/sculptgl/
Note to people with money, dump it into building 3D tools written in three.js and also making the GLTF file standard better.
You'd use forwardRef from React to wrap the child.
Within the child place the ref onto the <mesh>.
Within the parent, create refs using 'useRef' for each of the children.
Then you can use the ref in other hooks like ref.current.getWorldPosition
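A minimal react-three-fiber sketch of those steps; the Box component and its geometry are illustrative, not from the original post:

```javascript
import React, { useRef, useEffect, forwardRef } from "react";
import * as THREE from "three";

// Child: forwardRef passes the parent's ref down onto the <mesh>.
const Box = forwardRef((props, ref) => (
  <mesh ref={ref} {...props}>
    <boxGeometry />
    <meshStandardMaterial />
  </mesh>
));

// Parent: one useRef per child, then use the ref in other hooks.
function Scene() {
  const boxRef = useRef();
  useEffect(() => {
    const worldPos = new THREE.Vector3();
    boxRef.current.getWorldPosition(worldPos); // world-space position of the child mesh
    console.log(worldPos);
  }, []);
  return <Box ref={boxRef} position={[1, 0, 0]} />;
}
```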
If you could do one on making custom models out of geometry, that would be awesome. I found some cool examples and would love to learn how to do it. Like this where a lion is made from native three.js geometry.
Hi, and welcome! It'd be easier to answer your question with an example of your code, but I'll try anyway.
Keeping all the particles inside your camera's frustum will involve setting your camera's attributes, as well as the positioning of your particles.
Your camera is initialized with a series of attributes that determine what it can see. Additionally, the width/height of your canvas/renderer will affect what is contained within the frustum.
The trick to keeping your particles within viewing range is to ensure that their position is inside the bounds of your camera's frustum. Otherwise you'll place particles that are cut off from view.
You could either back your camera's position away from the scene's center (change the z position) until all the particles are inside the frustum, OR you could restrict your particle positions to ensure that they are never placed outside the frustum.
That's all probably pretty confusing, so please feel free to hit me back here with questions or code examples!
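To make the first option concrete, here's a small sketch of the math for backing the camera away until content of a known size fits the frustum (this is my illustration, assuming the content is centered at the origin):

```javascript
// Distance the camera must sit from the scene center so that content of a
// given half-height fits vertically inside a perspective camera's frustum.
// fovDeg is the camera's vertical field of view in degrees.
function fitDistance(halfHeight, fovDeg) {
  const halfFovRad = (fovDeg * Math.PI) / 180 / 2;
  return halfHeight / Math.tan(halfFovRad);
}

// e.g. with a 90-degree fov, content extending 10 units above/below center fits at z = 10:
// camera.position.z = fitDistance(10, camera.fov);
```

Note that the horizontal fit also depends on the camera's aspect ratio, so check the wider of the two if your particle cloud isn't square.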
You could have a look at Lee Stemkoski's Mouse & Keyboard examples here(http://stemkoski.github.io/Three.js/). But note that they tend to use old versions of THREE.js.
Also see the examples named "interactive" on the official THREE site. These are (probably?) using latest version of THREE.js
This isn't a bug, it's a security feature of the browser. The images must come from the same domain the application was served from, with the following exceptions:
- You override it at the browser level. For Chrome, launch it with the --disable-web-security flag.
- You set crossOrigin to allow another origin URL, or "" to allow any origin. This must be accompanied by the CORS header Access-Control-Allow-Origin on the server that hosts the resource. For example, imgur permits cross-origin requests.
Here is a function I wrote a while ago to load textures cross-origin
```javascript
loadTexture = function (url, uniform, cb) {
  var image = document.createElement('img');
  image.crossOrigin = '';
  var texture = new THREE.Texture(image);
  image.onload = function () {
    texture.needsUpdate = true;
    uniform.value = texture;
    if (typeof cb !== 'undefined') {
      cb(this);
    }
  };
  image.src = url;
};
```
If you're running locally, only the first of these options will help. In general, it's a good idea to run a local webserver for this sort of thing. Check out grunt for a simple build tool that can run a local web server with livereload.
I've found a wonderful example of a predefined camera path movement here https://codepen.io/wiledal/pen/WvNvEq (as suggested by /u/stovenn)! I'm just trying to get what is going on, but it's a very short code and it's seemingly easy.
you do this with css2drenderer and raycasting occlusion. if you are using react this is trivial: https://codesandbox.io/s/mixing-html-and-webgl-w-occlusion-9keg6
if you want it in vanilla, here's the code for occlusion: https://github.com/pmndrs/drei/blob/master/src/web/Html.tsx#L38-L58 but it'll mean a sunken weekend for sure. the problem is that in vanillajs stuff like that cannot be easily shared and re-used because there is no common ground, and something complex like that needs awareness to function.
Look at this demo (it is made with react-three-fiber), and also this video tutorial for the same demo.
Simple solution would be to wrap the object in a group:
- offset the object into that group by the distance between the point you'd like to rotate around and the center of the object.
- rotate the group
I've setup a react-three-fiber example for you here:
https://codesandbox.io/s/basic-demo-forked-6svk2?file=/src/App.js
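The same group-wrapping trick can be sketched in vanilla three.js as well (names and values here are illustrative):

```javascript
import * as THREE from "three";

// Rotate `object` around `pivot` (a world-space point) by wrapping it in a group:
// the group sits at the pivot, the object is offset by the pivot-to-object distance,
// and rotating the group orbits the object around the pivot.
function makePivotGroup(object, pivot) {
  const group = new THREE.Group();
  group.position.copy(pivot);
  object.position.sub(pivot);
  group.add(object);
  return group;
}

// const group = makePivotGroup(mesh, new THREE.Vector3(2, 0, 0));
// scene.add(group);
// group.rotation.y += 0.01; // in the render loop
```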
You need to channel the context into the canvas for this to work https://docs.pmnd.rs/react-three-fiber/advanced/gotchas or you use a router that is not context reliant like Wouter https://codesandbox.io/embed/router-transitions-4j2q2?codemirror=1
Oh yes. I actually forgot about that because my code is webpacked together and then run as a server using vue-cli.
Yes you can do that with the chrome debugging tool.
https://code.visualstudio.com/blogs/2016/02/23/introducing-chrome-debugger-for-vs-code
You could take a look at the OrbitControls code here:
http://threejs.org/examples/js/controls/OrbitControls.js
and look at the functions: onMouseWheel and handleMouseWheel. In OrbitControls the scroll wheel drives the dolly in and dolly out behaviors. But you could easilly substitute other camera behaviors.
What you're talking about is a special case of extrusion: namely, extruding along a circular path.
There is also this, if you're still feeling confusion:
http://threejs.org/docs/api/extras/geometries/LatheGeometry.html
But that uses point lists, not curves, so you won't be able to do arbitrary subdivision...
Yeah there is no vertical motion option in that code. There are various ways you might solve that problem.
1) Get the user to hit a key to jump the avatar/camera up and use vertical collision detection to determine where it lands. See this Mr. Doob example which uses pointerlock, WASD and SPACE to jump. May be difficult for an inexperienced user to control without practice.
2) Provide special invisible cubic object "zones" at top and bottom of stairway. When avatar enters such a zone the program can automatically float it up or down the stairway. This would be fairly simple to implement using bounding boxes.
3) Provide user with FLY controls such as this Mr. Doob example. Then user can fly avatar/camera through walls and windows. May be difficult to walk straight & level along a corridor though.
4) Provide pre-defined routes/trails through the building so that user just has to choose stop/start/forward/backwards/turn left/turn right and then the program will automatically float the avatar/camera from one node to the next.
5) Allow user to switch between different modes of travel such as those described above.
Anyway it sounds like a fun project. Good luck!
Hm, not sure what you mean by "complex models". Would this be enough?
http://threejs.org/examples/#webgl_loader_ctm
But, uh, threejs is not for modelling... it is a rendering framework, a layer over WebGL... how do you even do modelling in three.js?
Well I got something going (without using custom shaders) like this:-
Make the Sun with MeshBasicMaterial and make sure that the image file you use for the material Texture is very bright. Then create an Ambient Light in the scene. This will make the Sun appear bright.
Put your PointLight in the centre of the Sun. Make sure to give it enough brightness and range and appropriate decay pattern - see the reference link.
Other bodies (planets & moons) should be created with MeshPhongMaterial (because MeshLambertMaterial currently misbehaves with PointLights on certain tech environments).
You may need to experiment with the brightness of the planet/moon texture source images and change the level of the ambient light to get a suitable contrast between sun-facing and sun-hidden sides of planets/moons.
It's looking quite nice on my prototype, mebbe I'll build my own Solar System sometime :-).
So far I haven't considered shadows (e.g. moon on planet).
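The recipe above could be sketched roughly like this (file names, sizes, and light values are placeholders, not the original code):

```javascript
import * as THREE from "three";

const loader = new THREE.TextureLoader();

// Sun: MeshBasicMaterial is unlit, so a bright texture always looks bright.
const sun = new THREE.Mesh(
  new THREE.SphereGeometry(5, 32, 32),
  new THREE.MeshBasicMaterial({ map: loader.load("sun.jpg") })
);

// PointLight at the centre of the Sun: (color, intensity, distance, decay).
const sunlight = new THREE.PointLight(0xffffff, 2, 0, 2);
sun.add(sunlight);

// Ambient light lifts the sun-hidden sides of planets slightly.
const ambient = new THREE.AmbientLight(0x404040);

// Planets/moons use a lit material so they shade toward and away from the Sun.
const planet = new THREE.Mesh(
  new THREE.SphereGeometry(1, 32, 32),
  new THREE.MeshPhongMaterial({ map: loader.load("planet.jpg") })
);

// scene.add(sun, ambient, planet);
```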
In the example, if you look at the function "createLight" it shows how to add a sphere object to a pointlight.
Some documentation here on PointLight in general.
Not familiar with particles but maybe these links might help...
Examples...type particles into the text box.
Then you'll have to check THREE.BufferGeometry, find the attribute that holds the vertices (usually 'position'), use .getAttribute() to fetch the BufferAttribute and see what its length is.
Remember that the array length is in components, not vertices, so you'll have to account for the item size (a THREE.Vector3 is 3 components in 32-bit single-precision floats, i.e. 12 bytes per vertex).
EDIT: I checked and the array should still be available, so it's easier
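As a sketch of that count, dividing the raw array length by the item size gives the vertex count (the three.js lines in the comment assume a geometry variable you already have):

```javascript
// Count vertices in a BufferGeometry position attribute: the array length is
// in components, so divide by itemSize (3 for x,y,z positions).
function vertexCount(positionArrayLength, itemSize = 3) {
  return positionArrayLength / itemSize;
}

// With three.js:
// const pos = geometry.getAttribute("position");
// pos.count                      // vertex count, already divided for you
// vertexCount(pos.array.length)  // same number, computed from the raw array
```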
I can't stand using webpack for simple(r) stuff so frequently on small hobby projects I'll reach for parcel instead. It has a lot to hate but it does let you get started very quickly.
This is how I exported and rendered a MakeHuman mesh with three.js
https://codepen.io/satori99/details/Xjbvbr/
Though I used Blender. This is what the JSON looks like.
> Is a background on 3d tech like Blender/Maya a key factor to understand workings of ThreeJS? (I dont have any exp in 3d domain)
Absolutely. There's a whole domain-specific knowledge set with 3D that has nothing to do with JS programming. I would suggest working on that side of things. Follow along with some Blender courses and see how things work from that side. I'd bet there are many cases where a seasoned 3D artist would be able to tell you what you're doing wrong without even knowing any JS.
At the very least you need the vocab if you ever want to work with 3D artists which is important for doing really amazing things.
In addition, to learn 3D concepts in a more low-level, mathematical sense, I would recommend this book. A bit dated now, but it sort of goes through how you might build something like three.js (it actually goes beyond what three.js does). Don't be deterred by "game development" in the title; the patterns are mostly the same.
Not as efficient as a shader but you could try changing colors per face: https://stackoverflow.com/questions/11252592/how-to-change-face-color-in-three-js and calculate the colors based on a per face distance to your "pointDark".
Another face/vertex color example on this page: https://stemkoski.github.io/Three.js/
Also,
If I wanted to implement this code onto a website (I'm using cargo.site specifically)
would I need to find a way to host my files online (the .json, .js files) in order for my model to be displayed on the webpage?
In addition to the examples, I also found these videos quite useful: https://www.udacity.com/course/interactive-3d-graphics--cs291. If you're new to 3D graphics (like me), you will probably appreciate a bit of theory in addition to the three.js stuff. I would recommend some books too, but the ones I've tried all had examples that did not run with modern versions of the library. So, the official examples and newer articles/videos are your best bet.
You are right. Next time I'll make sure to create a codesandbox! I was able to solve my issue using the new ScrollControls instead, plus some of the code from here. Thanks a lot for offering help!
Thank you, I'll give the new control a try today. As for the camera movement effect, it's unclear to me what you mean by animating a wrapping group; do you mind putting it in other words? (My apologies, I'm still learning three.js.) This example does the same effect I'm looking for while also implementing the scrolling: if you scroll to the bottom, you'll reach a sphere geometry that slightly moves with the mouse position. Is the geometry wrapped in a group that is animated, rather than the camera? Is that what you mean? Thanks for the help!
You seem to know what you're doing and quite frankly I don't, so I figured I'd ask. I'm trying to have two buttons change the textures on the skybox as well as the sound. I've had a go using state changes but it's a complete mess. I have no idea if I've even done anything right, but I've tried.
Able to make any heads or tails of whether I'm even on the right track at all? https://codesandbox.io/s/assignment-jan-again-current-22st-jan-forked-again-b20n5?file=/src/index.js
u/drcmda Thanks for that feedback! The attachArray="material" causes the whole page to be blank for some reason.
I tried removing it, the page is no longer blank, however it only maps the last texture across all three planes. I have an example here
when you deal with arrays, keys can never be random. the key gives react a clue as to the object's identity; when you swap the order it will know which object is which. all i had to do was fix the two places where you used Math.random() https://codesandbox.io/s/tender-galois-hq02u?file=/src/CrossSection.js
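A tiny sketch of the key rule (the `sections` data and `id` field are illustrative):

```javascript
// Keys must be stable across renders so React can track identity when order changes.
const sections = [{ id: "a" }, { id: "b" }];

// Bad: a new key every render, so React throws away and re-creates each object:
//   sections.map((s) => <Section key={Math.random()} {...s} />)

// Good: the key follows the data, surviving re-orders and re-renders:
//   sections.map((s) => <Section key={s.id} {...s} />)
const keys = sections.map((s) => s.id);
```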
Hey! Thanks for the tip. However, I've already tried that... it resets the animation: the meshes go back to their initial positions. I want to resume the animation from the position I paused it at. You can see what I've done here -->
https://codesandbox.io/s/red-glade-u473n?file=/src/CrossSection.js
Remember that famous Joe Armstrong quote?
> The problem with OOP is it's got all this implicit environment that it carries around. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Nowhere is this more true than with threejs, whose structure requires everything to be connected to everything else. I've not seen a single pattern that solves it in any meaningful way. Forming classes with exposed render, update, take etc. methods still creates implicit contracts.
React solves it. As an example, this is Bruno Simon's first Journey stage implemented with components: https://codesandbox.io/s/threejs-journey-level-1-kheke you're looking at ~150 lines of clean, reusable code.
Now go to the official site, open the source tab and peek into the source maps. The difference is so large I don't want to throw around numbers. Bruno is an expert when it comes to coding patterns, but OOP to structure apps is the wrong tool, period.
there is no good way to organize oop, it does not lend to making and holding app state. so no matter which pattern, you'll bleed and it absolutely will turn into spaghetti.
do it like practically everything else is done: you keep state as oop, but apps as components. for instance this is bruno simon's first stage with components: https://codesandbox.io/s/threejs-journey-level-1-kheke it's clean, readable, every component is self-contained and can be tested and developed separately. this has cut over a thousand lines of code, if not more, from the original (you can look into the sourcemaps to see it).
https://codesandbox.io/s/basic-demo-forked-1cjp6?file=/src/App.js if you use react i'd suggest you also use react-three-fiber. this takes out the pain and you can control everything with react semantics.
you're using react, i'd suggest you use react-three-fiber as well because that will make it much easier. https://codesandbox.io/s/basic-demo-forked-cnqy5?file=/src/App.js you have full control over model contents this way, you can set colors and so on without either having to wait for the blender guy or having to traverse a mystery black box.
https://codesandbox.io/s/hungry-lake-0gioc I got a real basic version of this working. I started on a version with a real model but got in over my head with a lot of performance issues :| Do let me know if this helped you, or show what you came up with!
I think these implementations have a hard time keeping up to date, and raycasting into real events is quite complex. If it's an option for you, react-three-fiber has a complete and working pointer event implementation out of the box. These events bubble and can be stopped and captured, just like dom events.
I made a basic vanilla implementation for click and pointer events to demonstrate the difference between vanilla and react threejs, maybe it helps, but it would still be a lot of code to be useful.
vanilla: https://codesandbox.io/s/basic-threejs-example-with-re-use-dsrvn
react: https://codesandbox.io/s/basic-react-example-with-re-use-e2zit
Hi, I'm a 3D web developer and use a lot of Three.js.
Regarding your second point, I would say this is the most difficult part of the work, making things actually look nice, and I think you are absolutely right when you say the Three.js lighting usually looks quite disappointing. Its biggest flaw is having no real support for soft shadows, making all of the shadows have very sharp looking edges.
For this reason I would highly recommend learning some kind of 3D modelling software like Blender or Cinema4D (if you don't already have experience with them) so you can bake your lighting into a scene instead of using the Three.js lighting (example). This can really elevate the looks of the entire thing, granted your scene doesn't have too many moving parts. Bruno's course has a nice introductory lesson where he goes into how to achieve this if you want to learn more.
Gave it a try and got an error that pointerlock does not recognize camera as Object3d. I am not sure why that happens (my hunch is that it needs the camera after the render). Tried some stuff without much luck. Most likely I will give it another look out of stubbornness.
The other example in the comments looks like a legit approach though. https://codepen.io/tembling/pen/reZjEw
As WebVR could be the next big thing, maybe something that shows how to navigate in a 3D world? Doesn't need to be a second CloudParty. Just moving around in a 3D environment with simple 3D objects.
Combined with 3D objects that were modeled outside of your program. This can show that it is possible to let designers make objects for you to use. Maybe use something from Thingiverse.
Now you showcase not only what you can, but what can be done with these skills: WebVR, showing 3D printing objects, etc.
u/sort_of_sleepy Happy to say that I've got it working, sadly for all the wrong reasons. The root of the problem was the radius. For some reason or another, the dots were rendering properly, but way, way too large. I reduced the size by a thousand, and it worked flawlessly.
Here's the demo: https://krunker.io/?mod=Halftone (also changed the blending mode to add)
Thank you for your thorough reply!
I've looked through their documentation, and the mention of shaders is very bare-bones (https://krunker.io/helpdocs.html#mod_resource_packs) There is also no mention of RenderTarget or FBO
There aren't many shader mods in the game currently, but this is an official example of a fragmentShader that rainbow-ifies everything:

```c
uniform float time;
uniform sampler2D tDiffuse;
uniform float RGB_Speed;
varying vec2 vUv;
void main() {
  gl_FragColor = texture2D( tDiffuse, vUv ) * hue(time * RGB_Speed);
}
```

vertexShader:

```c
varying vec2 vUv;
void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```
I'm not sure how much this example will help, because it's all jargon to me. I think shaders are pretty lenient in Krunker; nothing I've seen suggests otherwise.
Sounds like it might be a lost cause, but I'll keep asking around. Thanks for the help :)
What most games do is create some real trees, but then use fake 2D trees to make the scene look more "full". Also, are you running this locally or on a server? If locally, the strain is on your GPU, so that will also cause some problems. Have you considered placing your code online on websites such as Glitch.com (assuming you are using threejs with nodejs)? That way you can have it run server side.
using fog (i think, or hope, i assume it'll cut off rendering for distant objects) and otherwise using THREE.LOD. i have a react demo for that so at least you can see that it's working: https://codesandbox.io/s/re-using-geometry-and-level-of-detail-12nmp?file=/src/App.js
i would start with organising. threejs code is usually a long blob that you first need to untangle. you start by splitting it into logical units (this is for that, that is for this) which you then dump into components. the shaders should be self-contained classes.
a good example is bruno simons threejs-journey, the last example: https://codesandbox.io/s/threejs-journey-ni6v4
if you look for the original you are greeted with a soup of interconnected bits and pieces. but now everything's ordered and self-contained. the shaders just extend THREE.ShaderMaterial and you can drive them declaratively (fireflies and the portal). this box will generally give you an idea how you can structure threejs.
i think what you need is AO, ambient occlusion. don't have a plain vanilla example of this but you can play with the values here https://codesandbox.io/s/hi-key-bubbles-forked-worwb?file=/src/App.js:2783-2941 to see if that would make a difference for you.
you can find free assets on sketchfab. or make them yourself in blender, but that takes a long time to learn.
the animation part is simple once you have a model, basically just lerp a pivot point: https://codesandbox.io/s/r3f-contact-shadow-q23sw
Thanks, I'm trying drei text and it's very clear and works really well. One thing though: it doesn't look at the camera when I move around.
Example: https://codesandbox.io/s/r3f-troika-text-forked-m2qli?file=/src/index.js
Try to move around and the text moves. I need it to look at the camera when I move around; do you think that's possible with drei text?
This CodePen (not mine) might be useful: Basic Test / Three.js Cylinder/Cone | potatoDie. Basically, see the lines of code that set cone.rotation.x/y/z.
Have you tried debugging in VS Code while the app is running (i.e client graphics are displaying) in Chrome browser ?
Since posting the question and getting tips I've been able to get that running as per: https://code.visualstudio.com/blogs/2016/02/23/introducing-chrome-debugger-for-vs-code.
yes, that was a copy-paste error in my post. fixed it above. but it's still not working. can this be the problem? from what i understand, vertices is an array of vectors....
FYI: The new TubeGeometry signature is: THREE.TubeGeometry ( path, segments, radius, radialSegments, closed, taper ) where taper is a function that takes a fractional position along the curve (i.e. 0 is the start, 1 is the end) and returns a multiplier on the "normal" radius of the tube at that point (i.e. .5 is half the radius, 2 is twice the radius).
If (like me) you don't like the taper callback or having to use relative radii, this StackOverflow post gives a modification to TubeGeometry that accepts an array of radii to use at each segment.
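A sketch of such a taper callback, assuming a three.js version that still accepts the taper argument described above (later versions dropped it):

```javascript
// Taper callback for the TubeGeometry signature above: t runs from 0 (start of
// the path) to 1 (end); return a multiplier on the tube's base radius.
function linearTaper(t) {
  return 1 - 0.5 * t; // full radius at the start, half radius at the end
}

// const tube = new THREE.TubeGeometry(path, 64, 2, 8, false, linearTaper);
```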
About your bonus question: it sounds like a similar problem I hit a while ago; I answered a question on StackOverflow about it, so I'll just link here rather than repost it! If you want me to copypasta for whatever reason, just shout.
React is not a contradiction, it's just a different way to express imperative code. new THREE.Mesh() is the same as <mesh />. But what React makes very easy is orchestrating contents, in ways that would normally cost you an arm and a leg. It simply is not feasible to build websites that way - and even if there are design agencies that do it, they pour insane resources into it. Their code is complex, large, and to a great extent can't be re-used.
With React you have a component model. That makes it easy to build, abstract, re-use and share. Your problem for instance is trivial there: https://codesandbox.io/s/r3f-suspense-zu2wo because both the canvas contents and the dom nodes share the same graph.
Everything else you mentioned, scroll, glitches, that is mostly just threejs. Glitches have to do with postprocessing and shaders, scroll means just to bind offsetTop to something: https://codesandbox.io/s/adoring-feather-nk16u That smoothness you're referring to is just lerp, where values are interpolated to their nearest goal.
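The lerp mentioned above is a one-liner; a sketch of how it's typically used for that smooth catch-up feel (the camera line in the comment assumes your own render loop):

```javascript
// Linear interpolation: each frame, move the current value a fraction t of the
// way toward the goal. Small t = smoother, laggier; t = 1 snaps instantly.
function lerp(current, goal, t) {
  return current + (goal - current) * t;
}

// In a render loop (sketch):
// camera.position.x = lerp(camera.position.x, targetX, 0.1);
```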
if i was you i wouldn't waste my time on script tags + modules. that spec is not usable in the real world. as you have realized, you can't simply import dependencies unless you're manually shifting code around.
so either use three in the old, archaic way, no imports, iife's, or ...
move to a bundling environment. it allows you to set up a production-ready project in seconds. you also wouldn't be on your own. right now you can't share code, and you couldn't use the code we share. the hardship and manual labour you go through isn't worth it: once you fix the one issue you're wasting hours and days on, the next one comes right after.
if you think that's still complicated, try codesandbox: https://codesandbox.io/s/reurope-threejs-basic-kiow9 you can download the project and run it locally, too.
It definitely limited me at first; you have to have a decent understanding of react to avoid some mistakes and unnecessary reinitializations of objects. But it has this great ability to extend shit that was written for three.js, like meshline https://github.com/spite/THREE.MeshLine
https://codesandbox.io/s/react-three-fiber-threejs-meshline-example-vl221
Also state management like redux or zustand is a damn near must with more complex projects that share states across multiple components
that was an interesting watch! i liked how you implemented the lasers. last year i tried something like this as well. i got it working in the end but struggled figuring out how to deal with that in particular: https://codesandbox.io/embed/r3f-game-i2160
here's how typed gltfs look like: https://codesandbox.io/s/romantic-tharp-0buje?file=/src/Model.tsx
as you see, never again will you have to traverse stuff. you can modify anything, change colors, materials, rip stuff out, put something new in, add events, shadows, and so on. the IDE won't allow you to use nodes and materials that don't exist, and it auto-completes props as well.
Just took a quick, non-exhaustive glance.
You can traverse your scene tree (gltf.scene.children...) and set the material on any object you find. Make sure to add names in your exports to make it easier to actually reach the elements you want to repaint.
This just picks a subtree and repaints it red: https://codesandbox.io/s/sharp-wildflower-54vim?file=/src/App.js:780-998
gltf.scene is a Threejs Group. You can traverse its children (and their children) until you hit a mesh. Just set the material and threejs will manage the rest.
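A vanilla sketch of that traversal (the function name and material choice are illustrative):

```javascript
import * as THREE from "three";

// Repaint every mesh under a glTF scene (or any named subtree).
function repaint(root, color) {
  root.traverse((child) => {
    if (child.isMesh) {
      child.material = new THREE.MeshStandardMaterial({ color });
    }
  });
}

// repaint(gltf.scene, "red");
// Or target a named subtree from your export:
// repaint(gltf.scene.getObjectByName("Hull"), "red");
```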
Here's a repo of the issue I'm facing,
https://codesandbox.io/s/convexpolyhedron-test-pjwrr?runonclick=1&file=/src/index.js
The only function you're looking for here is _createPhysicsModel() on line 39
Check out line 58: there's a bool there to toggle between the basic shape it's currently drawing and the attempt to create a Polyhedron instead. It will crash once you toggle that to false, allowing you to check the error in the console. I've also logged both the Geometry and the ConvexGeometry for you.
Sorry, I meant react-three-fiber
https://codesandbox.io/embed/r3f-bones-3i7iu
(I'm Kyle)
Here's an example: https://codesandbox.io/s/r3f-convex-polyhedron-cnm0s
```javascript
const g = new THREE.Geometry().fromBufferGeometry(nodes.Cylinder.geometry)
// Merge duplicate vertices resulting from glTF export.
// Cannon assumes contiguous, closed meshes to work
g.mergeVertices()
// Ensure loaded mesh is convex and create faces if necessary
return new ConvexGeometry(g.vertices)
```
well, threejs is imperative, everything is dispersed: set up, resize, effects, render logic, events, etc. each little thing is reaching everywhere in the source code; it breaks separation of concerns. physics adds yet another layer of complexity to it (for each primitive, two classes).
with r3f a component is a self-contained, managed and re-usable unit. all the logic is in one place: it reacts to size, state, props, physics, can subscribe to or even drive the renderloop, etc. if it unmounts all side-effects are cleaned up. it's even disposed automatically since react knows the objects lifecycle.
im new to cannon.js tbh, but we've made use-cannon in such a way that theoretically the physics engine can be replaced. it runs in a web worker. it could also be ammo or anything else now.
ps. this one was my first attempt of making a game, a space shooter: https://codesandbox.io/embed/r3f-game-i2160 but i never finished it
There's an example of a game built with react-three-fiber
on CodeSandbox. There are more linked examples on what's possible on the GitHub page.
My personal experience with react-three-fiber
was great. I would most definitely recommend it. Building a scene in a declarative way makes it easy to reason about. I don't see what would stop someone from building a game or any other interactive experience with it.
Your friend was right imo. There were only bindings available, and three especially going through so many changes meant constant maintenance for these projects. Keeping up was pretty much impossible, so they all dropped out at some point.
Reconcilers don't really know the platform, except a few basics like addChild, removeChild, how to create nodes, etc. That makes them flexible and easy to maintain. When three for instance added instancedMesh, there wasn't anything for me to do, you simply had to update your three version in package.json and it worked.
Dof relies on clipping planes. If near is 0.1 and far is 10,000 then the depth of a few objects scaled to one unit is next to nothing. Either use a logarithmic depth buffer or tighten the clipping planes.
Here's an example using the postprocessing lib: https://codesandbox.io/s/r3f-ibl-envmap-normalmap-yn4up
It's react but it's the same thing, look into the effects.js file.
I've hit another snag, do you know anything about it?
I can't get refraction to work with the PBR material, do you know if it's implemented yet, or if it will be implemented at all?
Feel free to ask for any help with WebAudioFont. Keep in mind, RiffShare isn't a demo toy, it is a music editor. You can use RiffShare to create music and share it with friends. Open options and click ShareViaTwitter or CreateSongTinyURL. For example, here's a tiny URL I created just now
A pyramid is a cone geometry with 4 sides. So after that ............ Have you ever created a website before, and do you know OOP? If you can read OOP, you can look at this
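A sketch of the pyramid-as-cone idea (the material and rotation are illustrative):

```javascript
import * as THREE from "three";

// ConeGeometry(radius, height, radialSegments): 4 radial segments make a pyramid.
const pyramid = new THREE.Mesh(
  new THREE.ConeGeometry(1, 2, 4),
  new THREE.MeshNormalMaterial()
);
pyramid.rotation.y = Math.PI / 4; // optional: square the base up with the axes
// scene.add(pyramid);
```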
> as it turns out you can't use actual instancing via BufferedGeometry.
This example uses something called THREE.InstancedBufferGeometry() but it doesn't appear to be documented.
Sorry, I overlooked that sometimes a vertex index is used in a buffergeometry.
This page gives an overview of buffergeometry in general. The use of an index is described on that page (near the bottom).
It appears the boxbuffergeometry does use an index. So for example in a simple box (e.g. a cube) there will need to be 6 vertices per face and 6 faces => 6 * 6 = 36 vertices. But the geometry data structure (in this particular case) reduces the amount of vertex information by using an index. In this case it defines position data (x,y,z coordinates for each vertex) for 24 vertices, making 3 * 24 = 72 coordinate values. The index then maps the 24 defined vertices onto the 36 required vertices. In principle you only need to define the positions of 8 vertices for a box (cuboid); 24 are defined because each corner is shared by 3 faces, and each face needs its own normal and uv at that corner, so 8 * 3 = 24.
If you want to get a good understanding of a particular buffergeometry you can inspect the geometry data structure in the debugger. Here is a jsfiddle which you can use - in the debugger put a breakpoint at line 129 and then float the cursor over the word "mesh" and a popup will appear which lets you drill down into the relevant data structure - in this case geometry/attributes/position/array lists the 72 individual coordinate values.
Here is the relevant documentation in case you have not seen it.
In a non-indexed BufferGeometry, three.js assumes that all faces have three vertices and that vertices 0, 1, 2 map to face 0, vertices 3, 4, 5 map to face 1, and so on (vertex 0 being the first vertex in the vertex array).
This pattern speeds up certain WebGL operations because there is no need to look up vertex numbers for each face. However, it also means that vertices are not shared between faces, so more vertices are needed for the same number of faces, and you cannot easily find out which vertices belong to a particular face.
I think you're on the right track. Having a "WebGL version" just means you'll use the WebGL renderer in three.js. If it's low-poly with simple lighting, you might have the option of using the simpler CanvasRenderer.
I assume you can export the low-poly models into something that three.js can load, too. This OBJ loader example does just that.
You'll notice this one has some camera control built in: in the source, the code manually moves the camera around based on mouse position in each render loop. Libraries such as trackball.js can build on this.
Those would be the basic components of a scene with a model in it that you can look around.
What you're really asking here is "how do I use three.js?", as there's so much in this question, but I'll answer as best I can! :) Have a look at MrDoob's three.js examples on the website. You can use pointer lock controls (from the Pointerlock example), where the pointer is locked to the window (unless Esc is pressed to unlock it), which gives you FPS controls like in a PC game: http://threejs.org/examples/?q=misc_cont#misc_controls_pointerlock Then you'll need to either use the .json Blender export plugin, or import your object into the three.js editor and File > Export as Object: http://threejs.org/editor/
You will then need to put your texture file into a directory, the same way you would when creating a standard HTML file. Then you'll need code like:
var jsonLoader = new THREE.JSONLoader();
var texture = new THREE.TextureLoader().load( "texturefolder/texturename.jpg" );
jsonLoader.load( 'models/modelname.json', function ( geometry, materials ) {
    levelmodel = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { map: texture } ) );
    scene.add( levelmodel );
}, null );
So drag misc_controls_pointerlock.html (plus three.js and PointerLockControls.js) into a folder along with your model and texture, remembering to change the file paths in the HTML (the bit that says <script src="three.js"> etc.). Then import the model and texture by copy-pasting the code I have given you, changing the file names accordingly (but only the ones in quotes). You will have to work out collision yourself, as I'm a bit rubbish with that, but a quick look at the code will show you how to do it!
No, I am not interested in UI. I just want to define the path, that's it. In this lathe example the path is defined inside a for loop using the Vector2 constructor.
I tried to add vectors tip to tip using the .add method of Vector2, in order to realize the first image (not the identical one, but something similar).
var dots = [];
var a = new THREE.Vector2( 10, 0 );
a.add( new THREE.Vector2( 0, 5 ) );
a.add( new THREE.Vector2( -5, 0 ) );
a.add( new THREE.Vector2( 0, 5 ) );
a.add( new THREE.Vector2( 2, 0 ) );
a.add( new THREE.Vector2( 0, 5 ) );
dots.push( a );
That approach did not work.
The mesh is modified on the GPU, while the raycaster runs on the CPU: there's no feedback from the transformed mesh for the JavaScript code to check.
Options:
Doing it in the vertex shader would probably be more performant (although I haven't tried it to compare), but my guess is your problem was not setting "verticesNeedUpdate" to true after updating vertex positions. See http://threejs.org/docs/#Reference/Core/Geometry.
Based on the little I see here, what you have looks like it should produce the desired output, but you're saying it doesn't. Hmm.
If you're getting undesired output, there could be any number of things wrong, but you should start with the simple stuff first. Maybe it's a parsing issue: try wrapping each JSON variable with parseFloat() and see what happens.
If you exhaust every simple thing you can think of (I'm leaning toward the data being screwed up or side effects from use of non-pure functions), then try the quaternion rotation approach. Moving beyond complex numbers is scary, but it's way more efficient, and avoids gimbal lock in 3D graphics.
I've actually been trying to follow this example http://threejs.org/examples/#webgl_animation_skinning_morph
but when I try to make an animation, I get an error (undefined is not an object) when it tries to access geometry.animation.
Deprecated, not depreciated. There are examples of skeletal animation on the three.js site. I'm on mobile right now, but if I get a chance, I'll post some of the ones I've used before.
edit: Use this example: http://threejs.org/examples/#webgl_animation_skinning_blending
I believe you can fix this issue by attaching the camera to another THREE.Object3D, then using that object for the X-axis rotation separately from the camera's rotation.
Something like:
var cameraDolly = new THREE.Object3D();
cameraDolly.add( camera );

// Rotate with mouse movement.
cameraDolly.rotateX( -mouse.moveY * rotSpeed * speed );
camera.rotateY( -mouse.moveX * rotSpeed * speed );
That may work, but you'll have to try it out. As for the camera's up direction, you might be able to flip the up vector with an if statement like:
if ( camera.rotation.x >= Math.PI ) { camera.up.set( 0, -1, 0 ); } else { camera.up.set( 0, 1, 0 ); }
I'm still pretty novice with this stuff but I figure I can learn more by trying to help or something like that. Have you looked at THREE.FlyControls?
Yeah, I want the brick to fall when the mouse is released. My OP is badly worded. I can't get the code from this example to work when Physi.js is involved.
There isn't one in core three.js; I don't know about threex.
There's a reason there isn't one: it's easy to create yourself, and that way three.js doesn't impose a strict way of doing interaction.
You can use THREE.Raycaster to detect which object is being "looked at", and set a timeout for, say, 1000 ms. If the next raycast hits a different object, cancel the current timeout and set a new one for the newly selected object. If the timeout completes, your "lookedUponObjectForASecond" function gets called.
Ah geez, it might be a bit hard without that.
Yeah, there are ready-made examples easily available. This example right here would solve most of your problems; you'd just have to replace a few meshes.
http://threejs.org/examples/webgl_shadowmap.html
Find some cheap guy in china? :)
The best way I've found to ensure that the exporter I am writing works well is to test it against the ThreeJS Editor. One can also export scenes from the editor, so it is a great way to test different kinds of objects and see how they come up in the resulting JSON. For now, I've gone for creating one JSON Scene file. I was able to write this up yesterday, reusing some of the work I did for the Object format, so it was not all 100% from scratch.
What you want to do is constantly update the vertices of the geometry according to the current waveform you get from SoundCloud. I think that's the easiest way (correct me if I'm wrong).
In this example, if you inspect the elements and go to the script source OBJMTLLoader.js, then either search for "group" or go to line 117, you will find the THREE.Group() I'm talking about.
Since you are an experienced user, can I ask what kind of workflow you use? I have tried to export a modeled and textured object from Cinema 4D to three.js, but I'm experiencing a lot of problems. Cinema 4D doesn't export OBJ files with .mtl materials, and the workarounds don't work. So I'm curious what the recommended way of doing this would be.
Perceived scale on screen is a result of the perspective projection process: an object closer to the camera looks bigger than one further away.
You can tackle this problem in many ways, but maybe your best option is to use a THREE.OrthographicCamera and change the size of the objects based on their distance to the camera.
If it's a 3D shape that you need, it's called a torus.
You can create it using the geometry primitive THREE.TorusGeometry.
Blender can store textures externally; it's probably that the .blend file doesn't include them. Better to export as a glTF with separate assets and then squoosh the hell out of those textures (resize + compression). And finally: npx gltf-pipeline -i model.gltf -o model.glb --draco.compressionLevel=10
I haven't read the whole thing in depth yet, but good work. I'm doing a similar style of coding and blogging as an evangelist at my company. Check it out; curious to see what I can learn from your blog.
macOS should be pretty similar to Linux (there are some minor differences).
I would recommend "testing" using Laravel or similar to quickly check whether the examples run OK on your system. This will confirm things like file permissions and folder structure.
Getting it running in Atom should be fairly similar, but I'm not sure of the mechanisms Atom uses to serve the HTML; i.e., you need to put the three folder in the correct position in the hierarchy for Atom to serve it correctly.
Also, check the debug logs in the browser you are using (usually F12 opens the developer tools). You may also be seeing CORS issues.
Generally I haven't had too many problems running three.js in a wide variety of setups. I have run these examples in Electron and NW.js (https://nwjs.io/), so I'm pretty sure it should be fine in Atom (can't be certain, though).
Hope this helps.
Ah, I see. I've used noise for creating terrain heightmaps. I imagine you're going for a Swiss cheese look, so you'll probably need a 3D map for that.
You've probably seen this, because it samples the code in your OP. You can probably do a workaround where you pass in an arbitrary number of 2D textures and interpret them as a single 3D texture. Unfortunately if you're hacking on textures you won't get any of WebGL's texture benefits like mipmapping and efficient caching.
Edit: Though I think a 3D texture is really overkill for generating a rock with holes in it. You can probably generate a 3D lattice (hexagons or some other non-rectangular shapes) and use that to display something decent looking, then provide a noise texture that can tile over the surface. Look at this: http://en.wikipedia.org/wiki/B%C3%A9zier_surface
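The "stack of 2D textures as a 3D texture" workaround boils down to a lookup like this (a plain-JS sketch; sample3D and isSolid are names I made up):

```javascript
// Treat an array of 2D grids of values as a single 3D texture:
// z in [0, 1] picks a pair of neighbouring slices and blends linearly.
function sample3D( slices, x, y, z ) {
  const depth = slices.length;
  const zf = z * ( depth - 1 );
  const z0 = Math.floor( zf );
  const z1 = Math.min( z0 + 1, depth - 1 );
  const t = zf - z0;
  const a = slices[ z0 ][ y ][ x ];
  const b = slices[ z1 ][ y ][ x ];
  return a * ( 1 - t ) + b * t;
}

// Carve holes wherever the sampled noise drops below a threshold.
function isSolid( slices, x, y, z, threshold ) {
  return sample3D( slices, x, y, z ) >= threshold;
}
```

The same lookup can be ported to a fragment shader, sampling the slice textures instead of arrays, at the cost of the mipmapping/caching drawbacks mentioned above.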
Thank you so much!
I got too caught up in trying to find a tool that could connect them directly. I now see the bigger picture.
WebSockets seems like a good way to go... But at a glance, it seems to be a bit over my head. I'm going to first try this ipc-main module, since I'm more comfortable with Electron.
I tend to forget this idea when it comes to personal/hobby projects: "having that separation of concerns in place gives you great flexibility on the frontend/rendering engine." Thanks for reminding me. :)