I made a userscript that adds a transparent canvas on top of the game canvas and draws the balls in 3D using http://threejs.org/.
It is buggy as hell right now (e.g. leaving ghost balls in the field and not rendering late joiners in 3d), but you can try it if you'd like:
https://github.com/keratagpro/tagpro-balls-3d/tree/master/build
I'll be updating this in the coming days if there's more interest. :)
It's built with three.js, off of the voxel painter example. You need WebGL for it to work, so use a modern browser.
The edit mode is sort of like a very simple mcedit. You can also import .schematic files.
The view mode tries to be like a Lego instructional building diagram. Use the numbered buttons to just show one layer at a time.
There are still a lot of blocks that don't render correctly, (or at all!) but most of the important ones work well enough.
Let me know what you think!
I think this doesn't even capture how cool Three.js is, because it's using lame graphics primitives. Take a look at this example of a nearly photo-realistic head and this example of a nearly photo-realistic, steerable race car.
Realize these are done entirely in Java*script*: there is no executable for these, and they run fine on the new generation of phones.
Building websites is not building applications. You're looking at the wrong jobs. That's a developer position.
I actually don't think that's a lot of requirements for what sounds like a game or some other large 3D application.
I would recommend taking a look at three.js if you're interested in doing 3D on the web and are just learning Javascript. It's a massively popular 3D library with a ton of support.
While it handles a lot of the lower level WebGL stuff for you, if you do have a curiosity to learn the lower level implementation it is open source, so once you get the hang of the basics you can always dig into the code.
Good luck! I've done a bit of work with three.js in the past, so if you do decide to use it, feel free to reach out with questions.
Three.js - a wrapper for WebGL (with fallback renderers for Canvas). It's not too hard to get started with, and it allows some really genuinely amazing results.
I personally like ThreeJS. It scales very well, from desktop to mobile browsers (Android, iOS).
For web games I tend to distrust cross-compiled solutions (e.g. Unity WebGL) since they're impossible to optimize or experiment with at the JavaScript level. Also, Unity WebGL is completely broken on Android and iOS.
OP just changed the text around and added in the music link, leaving everything else how it was. He also got the web server and domain.
If you're interested in the coding side of it, the author of the project used the Three.js library, which means you're going to need to know javascript to be able to make something like this. I recommend checking out the /r/webdev thread that I linked as they go into more detail on how this is done.
I'm dipping into 3D for the first time with Javascript using a nice webgl library called threejs.
Here's some similar examples that come to mind and you can view the source to play with them and tweak them:
http://oos.moxiecode.com/js_webgl/water_noise/
http://threejs.org/examples/webgl_shaders_ocean.html
I picked up a couple of books on threejs and started experimenting with all the code and projects out there.
I feel like it's a bit easier to wade into with javascript than I suspect some of the bigger languages would be, and all you need for a bare minimum is a notepad and a browser.
I feel like I'm at least getting familiar with the concepts of geometry, shaders, UV maps, x/y/z, materials, and all sorts of 3D stuff I never worried about with systems programming or web application programming.
Haxe is a nifty little language, a little similar to actionscript 3. Highly recommended. (I'd recommend ash framework too for entity component system). Also, webGL is available straight from Javascript. Check out three.js.
I don't know about DirectX, but you could do it in JavaScript using three.js, which leverages WebGL. WebGL has GPU access, so it can do some pretty fast rendering, and if you want it to be a standalone desktop app and not web-based, you could package the whole thing up with node-webkit.
Or more specifically http://threejs.org/examples/#webgl_effects_oculusrift
And https://github.com/rogerwang/node-webkit/wiki/Getting-Started-with-node-webkit
We recently decided that the wiki would be used as a developer reference actually, the correct place would be the docs. Admittedly the manual section is very sparse at the moment, but it does already have an equivalent to this article. Feel free to improve it if you like - there's an "edit" button on the top right of the page.
http://threejs.org/docs/#Manual/Getting_Started/Creating_a_scene
Just a heads up Flash is kind of a dying language/medium (there are lots of HTML5 things like canvas/webGL that supersede it). Java will be hard to dive into if you have no prior programming experience.
If you want a "gamified" way to learn programming, maybe something that's visual, why don't you learn to build a website? Codecademy is a good resource that provides feedback quickly as you're learning (and building) something. Their course on interactive websites might be a good start.
You mention animations/simulations. You also mention Flash. Why Flash? Because it's available on the web? Take a look at three.js to see what you can do with a "modern" language for animations on the web. No offense to Flash, it's just... on its way out.
It's an actual 3D WebGL canvas rendered via three.js. If you have programmatic control of the camera in the 3D scene you can just translate the mouse position to the x,y,z coordinates of the camera.
Not in pure HTML5, no.
Your best bet is to use webGL and a wrapper like three.js. I've never used it, personally, but from the documentation it looks like you can load in pre-existing meshes, or generate them on the fly (see /r/proceduralGeneration for some algorithms and inspiration)
Data is from here for 54,051 airport locations: http://ourairports.com/about.html#credits
Using the ubiquitous Three.js (http://threejs.org) and a neat JavaScript package that calculates the position of Voronoi cells from a given data set. https://github.com/gorhill/Javascript-Voronoi
Very much an initial experiment / work in progress but wanted to share nevertheless.
Hi, and welcome! It'd be easier to answer your question with an example of your code, but I'll try anyway.
Keeping all the particles inside your camera's frustum will involve setting your camera's attributes, as well as the positioning of your particles.
Your camera is initialized with a series of attributes that determine what it can see. Additionally, the width/height of your canvas/renderer will affect what is contained within the frustum.
The trick to keeping your particles within viewing range is to ensure that their position is inside the bounds of your camera's frustum. Otherwise you'll place particles that are cut off from view.
You could either back your camera's position away from the scene's center (change the z position) until all the particles are inside the frustum, OR you could restrict your particle positions to ensure that they are never placed outside the frustum.
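A rough sketch of the math behind either approach (the helper names below are made up): the height of the visible region at a given depth follows directly from the camera's vertical field of view, so you can clamp particle positions to that region, or solve for the camera distance that fits a chosen region.

```javascript
// Height (in world units) of a perspective camera's frustum at a given
// depth in front of the camera, for a vertical FOV of `fovDeg` degrees.
function visibleHeightAtDepth(fovDeg, depth) {
  const fovRad = (fovDeg * Math.PI) / 180;
  return 2 * depth * Math.tan(fovRad / 2);
}

// Distance the camera must sit from the scene center so that a region
// `height` world units tall fits vertically inside the frustum.
function distanceToFitHeight(fovDeg, height) {
  const fovRad = (fovDeg * Math.PI) / 180;
  return height / (2 * Math.tan(fovRad / 2));
}

// e.g. with a 90-degree FOV, a camera 10 units away sees a 20-unit-tall
// slab, so particles with |y| <= 10 at that depth stay in view.
```

The horizontal extent is just this height times the camera's aspect ratio, which is why the canvas/renderer size matters too.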
That's all probably pretty confusing, so please feel free to hit me back here with questions or code examples!
You could have a look at Lee Stemkoski's Mouse & Keyboard examples here (http://stemkoski.github.io/Three.js/). But note that they tend to use old versions of THREE.js.
Also see the examples named "interactive" on the official THREE site. These are (probably?) using the latest version of THREE.js.
In order to read values out from a fragment you can just output the texture to an array of pixels. I'm not sure if Three supports that or not; you may have to do a little bit of WebGL voodoo to make it work. Check out these examples for some inspiration: http://threejs.org/examples/?q=gpgpu
You don't need to use a compute shader. What you want to do is create an array, then call gl.readPixels() and it'll fill your array with the values from your bound texture containing the output of the fragment shader. It is described here: https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels
But this is fairly advanced and might involve a little hackery. You would definitely have to stray from Three.js to generate your heightmap. So this may not be an option for you.
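The readback itself is only a few lines; here's a sketch, assuming `gl` is a WebGL context whose currently bound framebuffer already holds the fragment shader's output (the helper name is made up):

```javascript
// Read back the RGBA pixels of the currently bound framebuffer.
// Returns a flat Uint8Array of width * height * 4 bytes, one byte per
// channel, rows starting from the bottom of the framebuffer.
function readShaderOutput(gl, width, height) {
  const pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```

For a heightmap you'd typically use just one channel (say, red) as the height value; reading floats back needs extensions, so packing heights into bytes is the safer route.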
As far as the look of your output goes: in that post they return 1 - distance, like in my code above; that will give you sharp mountain shapes rather than blobs. They also layer the noise with increasing frequencies and lowered amplitudes, which makes it seem highly detailed and breaks up the smooth faces. I think that is what you are looking for.
If you choose not to use the GPU to generate your values, you still have another decent optimization. The slowest part of the algorithm is that you have to compute a hash value nine times per pixel. One way to speed that up is to use an array of random values, then write a much simpler hashing function which simply picks a value from that array. It could be as simple as array[(x + y) % array.length].
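Sketched out, that lookup-table "hash" might look like this (the table size and the way x and y are mixed are arbitrary choices; a plain x + y works but repeats along diagonals):

```javascript
// Precompute a table of random values once, then "hash" coordinates by
// indexing into it. Far cheaper than computing a real hash per pixel.
const TABLE_SIZE = 256;
const randomTable = Array.from({ length: TABLE_SIZE }, () => Math.random());

function cheapHash(x, y) {
  // Prime multipliers break up the diagonal repetition you'd get from
  // plain (x + y). Assumes non-negative integer pixel coordinates; for
  // negative coordinates you'd need a proper non-negative modulo.
  return randomTable[(x * 73856093 + y * 19349663) % TABLE_SIZE];
}
```

Same input always gives the same value within a run, which is all the noise algorithm needs from its hash.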
You will need to learn JavaScript first, and get used to the building blocks of computer programs (variables, loops, branching, functions, etc). This will take a month or two, if you work on it every day. Free resources are at: /r/learnjavascript
Then! After you know the basics, you can add in a library called three.js, this is the most popular way to use 3D models with JavaScript in a web browser.
Look at the sample code here, that's what you need to write to put one cube on the screen. So after you work at it a while, you can understand all that code, and be able to load other objects (including pre-made objects).
And there's a sub for help at: /r/threejs
You'll probably want to use something like THREE.js and render the points on the GPU by using a particle system (now called Points in the latest version).
Here's a possible approach: why not prototype the text part in HTML first? It should be easy enough to migrate the story/choices to another engine when the story part is done. That way, all that will remain is to focus on how to make the text part look good & how to implement the trippy gameplay.
Plus by then you'll probably have a better idea as to what engine could satisfy your needs (tip: even three.js could be a valid choice for a dumb walking simulator) + you can still decide at any point to reduce your scope to only the text adventure, or have an alternate approach for the trippy sequences (like dumbing it down to 2D mini-games or whatever could be done in JS).
Ouch. This is a big one; it would help if you know what you'd like to do precisely. But first, say you're either doing 2D or 3D. For 2D you might want to brush up on some math; if you do a lot of web work with JavaScript, I'd recommend jumping into something like d3. This might be what you're looking for if your focus is on visualizing data.
If you want to focus on 3d, well, then it's time to learn a fair bit about linear algebra, as there are quite a few things to know before you get into things too much. Again, if you're more comfortable with javascript, there's tons of stuff for webgl, in this case I suppose three.js is a good start.
Of course, if you want to go balls out crazy, it's always worth checking what the demo scene is doing.
Might help if you edit your post and be more descriptive of what EXACTLY is the thing you're interested in.
Hi! Have a dig into three.js and examples on codepen – the effect you're looking for is quite common actually.
edit: this seems to be the effect in example 1, http://codepen.io/MarcoGuglielmelli/pen/lLCxy
I've started working on an "unofficial" jam project. I've been learning WebGL recently, so I've mostly just been toying around, but I've gotten a basic terrain generator up and running (screenshot).
I'm planning on adding infinite scrolling this afternoon, then I have a bunch of plans for making things more procedural and less random. In particular, I want to lay down a road network to make the terrain more traversable.
The shaders in the screenshot above are also fairly preliminary. I've played with them just enough to know that I'm sort of capable of writing custom shaders, but there's way too much other stuff to be done to be fiddling with them.
I'm really liking WebGL as a platform (supported by three.js). It's quite cool to see this demo running comfortably in a browser, though again I still have some work to do to make sure that the procedural generation aspect doesn't slow things down.
I don't know if there will be much "game" aspect to this anytime soon. My checklist of things to do mostly involves various procedural modeling tricks I want to try out, but if I ever manage to get through those, I was thinking of gamifying the demo by tracking various records you could break by exploration (e.g. tallest mountain visited, fastest travel across several zones, etc.)
Because you're probably rendering with raytracing, and a CPU is better suited for that than a GPU.
This is a nice demo that shows how raytracing isn't that parallelized.
Yeah, you are right. We want the whole thing to be very browsable from the start, so we need lots of content, and we have to delay the launch until then. The problem is that someone has to read every line of code from the developers. But it's fun reading through code.
We had examples on the page, but I thought I should remove them so it's not too confusing. I will post them here and put them back online next week. I think you are right.
I'll share some examples here. Mozilla's collection of WebGL stuff for VR: mozvr; all Three.js examples would work: three; and what I personally love most is this: learningthreejs
I hope someone does some kind of Virtual Reality room where you can walk to little screens, each with a different Family Guy/Simpsons episode. If nobody does that, I will have to write that. That's the whole reason I am on the project lol.
Thanks!
That... is a problem with WebGL. To confirm, this is the WebGL framework that I use. Can you run this or any of the other examples? http://threejs.org/examples/#webgl_animation_skinning_morph
If you cannot run it/them, then the problem is somewhere between your OS, web browser, drivers, and GPU.
Being able to do 3D directly in-browser is very new, to the point that IE 10 and below have no support for it whatsoever, and support in Firefox is iffy.
It's a new standard called WebGL, which lets Javascript access the computer's graphics card.
OP's link is a proof-of-concept that WebGL works, evidenced by the page being in the /threejs/ folder. Three.js is currently the most popular library for implementing WebGL in a website. Check out the site, their front page has a ton of other examples of how 3D is being used. It's a whole new world for web design, and I'm excited to see what people do with it.
I responded to the other commenter, explaining this.
A screenshot of a Rift-enabled Web app would look like any other Rift-enabled app. Check it out for yourself, no download required. The new APIs in this build just give you (as a developer) VR-specific functionality, like rendering only to the HMD, getting FOV info, etc...
You can start with http://threejs.org/ as a good way to get your feet wet dealing with 3D. This particular site looks to be doing things in a more specialized way than that, but to begin 3D in the browser, I found three.js to be a palatable entrance point. It enables the same sort of effects, uses WebGL, and is reasonably possible to work with even without a lot of experience working with 3D objects.
Note the other comments in this thread - this is fun to mess around with, but for commercial stuff you are paying a serious price to deliver your message this way and alienating a lot of visitors. Cool as hell, though, eh?
Dude Doom 3 is 10 years old this year. It's only using OpenGL 1.4 at best.
Doom 3 was very good at squeezing out the best of what was available at the time (as id always was); but OpenGL has changed drastically since then.
A good example of modern GL would be GL 3+, pure GLSL with no fixed functionality, and brownie points for nice use of tessellation shaders.
In fact, even looking at recent JavaScript engines in WebGL would be a better example. WebGL is based on OpenGL ES 2, with no fixed functionality, so it bears a stronger resemblance to GL 3.
three.js is worth a gander. There's a deferred renderer in the examples, too.
Hey thanks a lot!
I'm a programmer and I've always wanted to make a game. This whole project just sort of... happened. It started off as me playing around with some open source libraries here and there for fun and I just kept pushing it further and further and this is where I'm at now.
The language the engine is written in is JavaScript. I've used a wide range of libraries in the process of development, even swapping them out entirely for others at times. I'm in a really stable place now, though, and don't see any major changes to the tools being used for quite some time. The main ones I'm using are three and react. I also use a lot of custom code and other open source libraries which I've developed or contributed to myself.
You could take a look at the OrbitControls code here:
http://threejs.org/examples/js/controls/OrbitControls.js
and look at the functions onMouseWheel and handleMouseWheel. In OrbitControls the scroll wheel drives the dolly-in and dolly-out behaviors, but you could easily substitute other camera behaviors.
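The shape of that substitution, with made-up names: OrbitControls turns each wheel notch into a dolly scale factor, but the same handler could drive any camera parameter you like (field of view, a custom zoom level, etc.):

```javascript
// OrbitControls-style mapping from zoom speed to a per-notch scale
// factor; each wheel notch multiplies or divides the distance by this.
function getZoomScale(zoomSpeed) {
  return Math.pow(0.95, zoomSpeed);
}

// A custom behavior: instead of calling dollyIn/dollyOut, return a new
// camera distance from the wheel delta.
function handleMouseWheel(deltaY, currentDistance, zoomSpeed) {
  const scale = getZoomScale(zoomSpeed);
  // deltaY < 0 means scrolling up/forward in most browsers: move closer.
  return deltaY < 0 ? currentDistance * scale : currentDistance / scale;
}
```

In OrbitControls the equivalent of handleMouseWheel is exactly where you'd swap in a different behavior; the surrounding event plumbing stays the same.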
I've used three.js for a 3D solar system simulator that had some similar effect for the background. Not too tough to get into if you've never worked with 3D graphics and OpenGL before.
This looks nice for getting started: http://humaan.com/web-3d-graphics-using-three-js/
and here you can view the source code of each animation which is also nice: http://threejs.org/examples/#webgl_animation_cloth
What you're talking about is a special case of extrusion, namely extruding along a circular path.
There is also this, if you're still feeling confused:
http://threejs.org/docs/api/extras/geometries/LatheGeometry.html
But that is using point lists, not curves, so you won't be able to do arbitrary subdivision...
Yeah there is no vertical motion option in that code. There are various ways you might solve that problem.
1) Get user to hit a key to jump the avatar/camera up and use vertical collision detection to determine where it lands. See this Mr.Doob example which uses pointerlock, WASD and SPACE to jump. May be difficult for an inexperienced user to control without practice.
2) Provide special invisible cubic object "zones" at top and bottom of stairway. When avatar enters such a zone the program can automatically float it up or down the stairway. This would be fairly simple to implement using bounding boxes.
3) Provide user with FLY controls such as this Mr. Doob example. Then user can fly avatar/camera through walls and windows. May be difficult to walk straight & level along a corridor though.
4) Provide pre-defined routes/trails through the building so that user just has to choose stop/start/forward/backwards/turn left/turn right and then the program will automatically float the avatar/camera from one node to the next.
5) Allow user to switch between different modes of travel such as those described above.
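Option 2 can be as simple as an axis-aligned box test each frame. A pure-JS sketch (with three.js you'd likely use THREE.Box3.containsPoint instead; all names here are made up):

```javascript
// An invisible "zone" is just an axis-aligned bounding box.
function makeZone(min, max) {
  return { min, max }; // each of min/max is { x, y, z }
}

function zoneContains(zone, p) {
  return (
    p.x >= zone.min.x && p.x <= zone.max.x &&
    p.y >= zone.min.y && p.y <= zone.max.y &&
    p.z >= zone.min.z && p.z <= zone.max.z
  );
}

// Each frame: if the avatar stands inside a stair zone, return it so the
// program can start floating the avatar toward that zone's target.
function findActiveZone(avatarPos, zones) {
  return zones.find((z) => zoneContains(z.box, avatarPos)) || null;
}
```

Two such zones per stairway (one at the top, one at the bottom), each pointing at the other end, covers the up-and-down cases.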
Anyway it sounds like a fun project. Good luck!
I don't know about their cloud solution (it's not something I'm all that enthusiastic about honestly), but looking at OctoPrint, it looks a lot less usable than what Tiko's building (which you can check out yourself, due to Tiko's aforementioned non-fastidiousness - go to http://print.tiko3d.com/, click the logo at the top, then click the menu in the upper right hand corner and go to "Account settings", then drag and drop an STL file into the window).
This is the sort of tool I'd appreciate having, more than being able to fiddle with the minutia of the nozzle's control settings: a 3D representation of the print bed, and the ability to arrange the models I wish to print within the printable area. (I mean, what would be really awesome would be a straight-up integration of OpenJSCAD mixed with something like http://threejs.org/editor/, but I'd settle for what Tiko's doing.)
As far as I can tell, they used three.js to make this happen.
There should be tutorials on how to achieve such effects. Here is one. (I didn't read it, simply found it on google).
Clicked on edit on one of the projects and holy shit, I somehow never knew three.js had a whole online editor. That is amazing, three.js blows my mind.
Hm, not sure what you mean by "complex models". Would this be enough?
http://threejs.org/examples/#webgl_loader_ctm
But, uh, threejs is not for modelling... it is a rendering framework, a layer over WebGL... how do you even do modelling in three.js?
Well I got something going (without using custom shaders), like this:
Make the Sun with MeshBasicMaterial and make sure that the image file you use for the material Texture is very bright. Then create an Ambient Light in the scene. This will make the Sun appear bright.
Put your PointLight in the centre of the Sun. Make sure to give it enough brightness and range and appropriate decay pattern - see the reference link.
Other bodies (planets & moons) should be created with MeshPhongMaterial (because MeshLambertMaterial currently misbehaves with PointLights on certain tech environments).
You may need to experiment with the brightness of the planet/moon texture source images and change the level of the ambient light to get a suitable contrast between sun-facing and sun-hidden sides of planets/moons.
It's looking quite nice on my prototype, mebbe I'll build my own Solar System sometime :-).
So far I haven't considered shadows (e.g. moon on planet).
In the example, if you look at the function "createLight", it shows how to add a sphere object to a PointLight.
Some documentation here on PointLight in general.
I'm guessing you won't see a ton of responses here because that is one super-elaborate project that would take some time to delve into to fully answer your question - I've taken a look at the source code and I'm guessing it would take me at least 5-8 hours to really understand everything that's going on in that experiment. To your question, I find his vertex displacement pretty nontrivial and I think trying to emulate it without plenty of WebGL experience would be pretty daunting / frustrating...
I'd suggest you start with studying raycasting in three.js (the process of projecting a 2D mouse location into a 3D coordinate system) and take it from there. Once you realize that whatever's happening happens when the mouse hovers a certain area, you at least know where to find the functionality you want to understand in the source code. That code, by the way, isn't easy to get to and is minified to hell and back, so you'll probably want to import it into your favorite text editor and auto-format it.
Hope this helps!
EDIT: in the interest of my comment being slightly less useless :), good places to start are:
The Raycaster class in the threejs docs
Not familiar with particles but maybe these links might help...
Examples...type particles into the text box.
I can confirm that Nightly is wonky in various ways right now. The aframe.io examples work best but even they appear shrunken inside the DK2.
The Three.js WebVR examples appear offset and the webvr-boilerplate demo doesn't display in the DK2 at all.
I think you've just caught the WebVR folks at an awkward time. They're in the midst of deploying a brand new version of the APIs, so the dust is still settling. Check back in a few days!
Yup - super easy to do:
You might also want to check out three.js if you only want to get up and running on Google Cardboard.
If you're only interested in the Rift and don't feel like going onto the web, this would be a super simple project to set up in Unity, also.
Then you'll have to check THREE.BufferGeometry: find the attribute that has the vertices (usually 'position'), use .getAttribute() to fetch the BufferAttribute, and see what its length is.
Remember that the length there is in array elements, so you'll have to account for the item size (a THREE.Vector3 is 3 components in 32-bit single-precision floats, i.e. 96 bits or 12 bytes per vertex).
EDIT: I checked and the array should still be available, so it's easier
Hi, German dev here. I've played around with point clouds extensively and published software for recording and playing back point clouds (see www.multimedialab.de).
There is a point-cloud viewer for Unity available in the asset store; not sure what you mean by manipulating them. I am interested in getting three.js to work with point clouds as well. There is an example of this available online; see the Kinect demo (http://threejs.org/examples/#webgl_kinect).
It uses prerecorded video clips with color-encoded depth values for the points. Feel free to PM me if you have more questions.
Thanks! I started out writing the hyphae as voxels that would check to see if their neighbors were already visible or not, kind of like Conway's Game of Life.
Then I realized that I could keep track of the paths for each player as lists of 3D points, and all I needed to do to check for collisions was check the two lists to see if they contained the next point for growth. This model happened to work really well with Three.JS's TubeGeometry. The hyphae are also constrained to the soil cube as far as where new points can be generated.
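That list-based collision check can be sketched in a few lines, mirroring each path in a set keyed by the point's coordinates for O(1) lookups (the names here are made up, not from the actual project):

```javascript
// Each player's path is a list of integer 3D grid points. Alongside the
// list, keep a Set keyed by "x,y,z" for fast collision checks.
function makePath() {
  return { points: [], occupied: new Set() };
}

const key = (p) => `${p.x},${p.y},${p.z}`;

function grow(path, point) {
  path.points.push(point);
  path.occupied.add(key(point));
}

// A candidate growth point collides if either player's path already
// contains it.
function collides(candidate, pathA, pathB) {
  const k = key(candidate);
  return pathA.occupied.has(k) || pathB.occupied.has(k);
}
```

The surviving list of points is then exactly what something like TubeGeometry wants as its path input.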
If your client has the Pro version of Sketchup, they can export the model as an FBX file.
Three.js has a converter tool for FBX into its own format.
You can then use Three.js to render it. Viewport manipulation is something you'll have to consider yourself though.
The challenge will be trying to get the benefit of all the traditional website markup with WebGL which is just about rendering polygons and textures.
If I were doing this, I'd pre-make the website traditionally (with all the styling), then render portions of it into a Canvas to create textures which then get rendered onto surfaces in a 3D environment.
threejs is super helpful for doing high-level 3D things without learning low-level OpenGL ES calls.
Tracking interaction events (clicks and such) would still be tricky but doable (you'll need to convert camera space interactions into world space and associate it back to the original markup).
Depending how complex of a prototype you have in mind, you could probably have something cool done in six months. :) Could probably save a lot of time by building this with a very specific design/markup layout in mind, rather than a fully generic solution.
A quick glance shows they are using three.js, a WebGL javascript library. WebGL is the web version of OpenGL, which is how you get your computer's GPU to draw 3D graphics.
Let it be known that nearly all modern websites are built with HTML5. HTML5 really just refers to the HTML markup, so the <canvas></canvas>
element is the only thing HTML5 about the graphics. The rest is JavaScript. People do say HTML5 to differentiate highly interactive sites from their older, Flash counterparts, and it's true that JavaScript has had a lot of improvements since before HTML5 (so HTML5 could be a blanket term for HTML5, CSS3, and ECMAScript 5), but I just wanted to clarify this because it can be confusing.
Tip: Use the extension Wappalyzer to see a website's stack, including plugins. It's quite accurate.
Looks like three major functions: two that generate the two scrolling layers of goopy noise, and then a deterministic random number generator (which works by stripping digits off of sin and cos) used to make the stars.
The iChannel0 contains the frequency spectrum of the audio; you'll have to use some Web Audio hack to get that data. But other than that, it doesn't look like there are any other inputs, so porting should be pretty straightforward. You can start with the THREE.js EffectComposer examples and build your thing within the effects framework, since it does the fullscreen-quad setup for you already...
http://wayou.github.io/3D_Audio_Spectrum_VIsualizer/
http://threejs.org/examples/#webgl_postprocessing
example postprocessing fullscreen quad type effect: http://threejs.org/examples/js/shaders/DotScreenShader.js
Basically you're going to paste the contents of the shader into your three.js fragment shader...
edit: NVM, got curious so I did it for you... Paste the child comment's contents into the HTML pane of the Codepen you linked. I hardcoded the values for "frequency[4]" because those would come from a texture containing frequency values for the audio source and I didn't want to implement that, but it should be doable with some of the links I provided above...
Yes, WebGL is the way to go. But WebGL stuff created in Unity still requires the Unity player. If you don't want that, you should look into Three.js. WebGL is very taxing on the GPU, so be careful about your level of detail and world size. Here are some examples of using a heightmap in Three.js to create a terrain and using a top-down view. Pretty cool!
To create a top-down view, simply place the camera at the height you want (if you're using perspective projection), aim it towards your map and transform it accordingly when the player drags the mouse or whatever.
I can't give out our specific program, but I can tell you how to make your own. Basically modify this STL loader example (or any other 3D file loader example) to get the scene the way you want it:
http://threejs.org/examples/#webgl_loader_stl
and then add this GIF encoder package to output GIFs:
WebGL is for 3D, though. It's very complex and low level, since it's almost a direct port of OpenGL. If you wanted 3D, you'd be better off using a library that provides a level of abstraction, eg, three.js.
The canvas on its own can work fine for 2D.
Brekel Pointcloud2 (http://brekel.com/brekel-pro-pointcloud-v2/)
There is also a version for Kinect1.
I am myself looking for more Kinect2 depth recording software.
There is also Q3D, see q3d.quaternionsoftware.com
And then there is http://threejs.org/examples/#webgl_kinect, which uses a video texture for the point cloud (simple height-field rendering).
PM me for more info or questions. For Mac OS X, there is also dotswarm.nz, albeit not for recording if I get this right, just for animating already recorded pointcloud objects.
As a learning project and just for fun's sake, I recently made a 3D model of my new house that you can walk around in on any mobile WebGL-compatible device (using three.js), including modern iPads and Android devices. It's really broken: it relies on touch controls, so you can't control it from a PC at the moment, and the code quality is really low. There is a lot of dead/unused/trial-and-error code, and all the dimensions are hardcoded, for example. There is also no collision detection, so you can walk straight through walls, and upper floors and interior (other than some interior walls) are not available either. It may, however, be a starting point for a project like this, since it kind of matches the requirement of being viewable without special software or tools.
If you want to take a look, I uploaded it to GitHub. Just put it on a local web server and navigate there with your mobile device (or your PC if you want to, but you won't be able to move around) and you can see for yourself.
Edit: picture: http://i.imgur.com/iNLK5fs.jpg
Yeah somebody did that with one of Escher's drawings of him holding a reflective sphere that was basically a projection of the room around him. It was on the internet like a day ago, don't have the link handy. But yeah I think a similar process could be done with the Huygens footage.
edit: here it is: http://threejs.org/examples/webgl_materials_cubemap_escher.html
You picked a tough one to start with. This one is building up the image as bits with math; as such, it's a hard one to tweak from an art perspective.
Take a look at threejs.org for a higher-level graphics API.
Start them off with some 2D canvas demos (as others suggest) and then hit them with WebGL. Kids are so used to ridiculous standards of 3D from movies, console games and even commercials that 2D canvas stuff can seem like Kindergarten to them.
Here are some of my favorite examples from Mr. Doob, creator of three.js:
But the kicker is showing them how you can modify something (size, color, anything) by changing a line of code, and emphasizing that they already have everything they need to play with this kind of stuff (just a text editor, like Notepad, and a browser; nothing else).
I think that's key to turning graphics "consumers" into "makers", by showing them that the tools are right there in reach.
I've had great experiences with others on campus, so I wouldn't really worry about people not being open enough or being uncooperative. I, personally, would be excited to teach others about things that interest me (mathematics, machine learning, algo-trading). But, as a general rule in life, you shouldn't always depend on others being there to teach you.
The group is just a collection of individuals working on things. I can't really set the agenda. Whether or not they're friendly or open about teaching each other is entirely up to the individual.
The group might be the best fit if you endeavour to learn this stuff on your own in the presence of others who are pursuing their interests as seriously as you are. That sort of influence can be what bonds self-motivated people.
I have some experience with libGDX on Android, as well as three.js, so if you want to mess around with that stuff, I have a project in mind and might be available to collaborate on the UI.
I like using code to make my art. As an engineer, I feel more in control and confident of what I'm producing when it starts in a code-like textual format that I can tweak to my heart's content. So I'll often end up using procedurally generated art.
For 2D stuff, it's generally vector art. If I'm doing an HTML/JavaScript prototype, I'll often just use the drawing primitives on the HTML canvas, or use the canvas wrapper EaselJS to help out.
For 3D stuff, I might build the triangle mesh from scratch, and depend more on simple vertex colors than on textures and messy uv coordinates. I recently did some procedural planet generation where I used three.js to build the triangle mesh, for example.
Even when I need to create 2D images and load those up in my game, I sometimes go to POV-Ray to design 3D objects using its textual code-like language, and then just save the raytraced rendering of it as the final image.
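If it helps anyone picture what that kind of procedural mesh building looks like, here's a rough plain-JS sketch that produces the flat position/color arrays you'd hand to a THREE.BufferGeometry. The height and colour formulas are just made-up placeholders, not anything from the post above:

```javascript
// Build flat position/color arrays for an n×n grid of quads (two
// triangles each) — the layout BufferGeometry 'position' and 'color'
// attributes expect (3 numbers per vertex, no index, no UVs).
function buildGrid(n) {
  var positions = [], colors = [];
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      // two triangles per cell, listed corner by corner
      var cell = [[i, j], [i + 1, j], [i + 1, j + 1],
                  [i, j], [i + 1, j + 1], [i, j + 1]];
      cell.forEach(function (p) {
        var h = Math.sin(p[0]) * Math.cos(p[1]); // placeholder height
        positions.push(p[0], h, p[1]);
        // simple vertex colour derived from height, instead of a
        // texture and uv coordinates
        colors.push(0.5 + 0.5 * h, 0.3, 0.5 - 0.5 * h);
      });
    }
  }
  return { positions: positions, colors: colors };
}

var grid = buildGrid(4); // 16 cells × 6 vertices × 3 components = 288 values
```

From there you'd wrap the arrays in Float32Arrays and set them as attributes on a BufferGeometry with vertex colors enabled on the material.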
as the author of said gif, let me say: well done and very cool! (and frankly more interesting & harder than my gif!)
You might have fun doing this in three.js as well. There are a lot of shiny web toys out there that are three.js + dat.gui.js. Plus webgl is increasingly on mobile now!
And don't forget you can approximate an orthographic projection by using a small FOV and moving the camera back (or you can just set the camera's projection matrix directly if you have access to it).
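The reason the small-FOV trick works: apparent size falls off as 1/distance, and that falloff flattens out when the camera is far away, so depth differences stop changing the projected size much. A quick plain-JS sanity check of that ratio (no three.js needed):

```javascript
// Perspective projection: apparent size of an object of height h
// seen from distance d is proportional to h / d.
function apparentSize(h, d) { return h / d; }

// Two unit-height objects, one unit apart in depth.
var nearD = 10, farD = 11;

// Close camera: the nearer object looks noticeably bigger.
var closeRatio = apparentSize(1, nearD) / apparentSize(1, farD); // 11/10 = 1.1

// Camera pulled back to ~1000 units (with a correspondingly small FOV
// to keep the framing): the sizes converge, which is why this
// approximates an orthographic projection.
var pulledBackRatio = apparentSize(1, 1000) / apparentSize(1, 1001); // ~1.001
```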
Yep. And I've been a Java developer for 4+ years (SCJD and SCJP, if you still recognise those certification terms). C++ is preferred for speed; it's getting cleaner and better with C++11 and all sorts of libraries out there (which enable things like lambda functions, parallel_for, etc.), and some people still hand-craft assembly in time-critical code.
There's even a movement right now where some people are shunning C++ and going back to pure C.
Edit: forgot to say that three.js is a 3D library for the web browser (WebGL). http://threejs.org
I'm not the author, but from what I read (in a Facebook post), they used a quadcopter (a "drone") with a GoPro camera to film or photograph the area around that place. Then, with the help of Photosynth, they turned the various captures into a set of points in three dimensions, and finally those points are used to build the three-dimensional map you see at the link (in this case using three.js).
In the Photosynth examples (like this one) you can switch the view mode to show the points by pressing the P key, and rotate the model with the mouse.
> but im too much of a gamer to use it for everyday stuff
The situation is getting a lot better with Steam supporting Linux, Civilization being released on Linux (Steam) and indie games providing Linux support. I use Linux all day every day, and while I don't have a lot of time to play games, there are a lot to choose from these days.
> also developing games on linux (which is kinda what i wanna do in my life) gives me the shivers(how can people like opengl??)
As for developing games on Linux, I far prefer developing on Linux to Windows. Visual Studio is nice, but I miss my nice command-line utilities (I use Vim, and I can't stand the Vim plugin for VS). I have never done DirectX/XNA development, but one major benefit of OpenGL is that it works everywhere (Windows, Mac, Linux, iOS, Android, etc.), whereas DirectX is Windows-only AFAIK.
There are lots of OpenGL tutorials/libraries around and it's quite a sane library for dealing with 3D. Also, OpenGL ES 2 is nearly identical to WebGL, so if you wanted to ship on the web you can get started very easily.
In fact, I'm developing a game right now and prototyping on WebGL because it's much more productive than recompiling every time I make a change. If you haven't already, I'd highly recommend taking a look (look at three.js if you haven't already) to see what the browser is capable of.
It's just a bunch of JavaScript that renders into a <canvas>
element. Here's an article on the site -- it uses a custom rendering engine, but if you wanted to do something similar you could use an existing engine, like three.js, phoria.js, etc.
Right off the bat, it's unclear what this idea is.
It does, however, inspire a lot of great thoughts about where you may want to go with it. It sounds like an awesome way of switching between social media websites with an interesting user interface, and a lot more.
You asked for ideas so here they are:
Make it a framework for organizing your login credentials as well as website content/the internet.
Instead of using Flash, you may want to try your hand at JavaScript. WebGL may be the future of graphics and animation on the internet.
I think you should display at most 8 out-degree relations to each node.
I've had great results consulting a designer about layouts before trying to make them myself. You want to get the node shape, the line thickness and the color choices just right, and work out the core mechanics to a high degree of accuracy. My projects start with very precise screen layouts created in collaboration with a designer; only afterwards do I start thinking about making it happen in code. This way the code depends on your vision, not the other way around.
Omniste.com is already taken, so consider finding a different domain.
If you do put this on github, make sure that you create a "demo website" that is connected to your node. If you want others to use this, then create an example website that is integrated with this framework.
All in all, cool idea. Let me know when you start making progress!
Yeah, if you don't know javascript there are a lot of people who will tell you you are not a frontend developer but merely a designer. No offense, but I tend to agree, especially in the Valley. You should definitely be investing a lot of your time in learning javascript, pronto.
The good news is that javascript is not too hard, and there are a lot of things to help you learn it. Search this sub and you'll find lots of references to the best books and courses. Since you know so much about the DOM already, digging in on jQuery and learning more of the advanced features would probably be beneficial (there are some courses on this, like Codeschool's). You don't want to be a jQuery-only "javascript" developer, but it will feel the most familiar to you based on your background so I think it makes a reasonable starting place. You'll get a huge return for just a weekend's worth of time spent learning how to wield jQuery properly.
Learning vanilla JS can be a great deal of fun if you let it. My preference has been to organize my learning around games because they basically force you to learn object-oriented javascript. Start with the super simple (Tic Tac Toe) and build up towards more complex (Checkers, Chess, Asteroids, whatever). Game UIs can be driven by jQuery, canvas, or other graphical libraries used in conjunction with a framework like Backbone to manage game state.
If you don't like games but are still graphically inclined, maybe check out the galleries on paper.js, d3.js, and three.js and start tinkering with their examples. The bottom line is to make your learning about play, and you'll be more motivated to keep going instead of constantly feeling bogged down in boring crap.
Good luck!
I was skimming the examples for threeJS and ran across a molecule simulator, here: http://threejs.org/examples/#css3d_molecules I realized that there is not much difference between that and a bunch of hyper-link connected stars. I also took this as inspiration: http://workshop.chromeexperiments.com/stars/
I personally don't like CoffeeScript; it adds a layer of complexity over your normal JavaScript in exchange for some syntactic benefits.
You should check out three.js, it's an awesome webgl library.
The Wolfram Alpha API doesn't give the points for a plot, only an image. You'll have to evaluate them yourself.
You should also check MathBox.
Good luck! :)
One possibility, which I see as possibly the easiest by far without having to model anything: combine these two examples, A + B, roughly speaking.
A. http://threejs.org/examples/#css3d_molecules B. http://threejs.org/examples/#webgl_effects_oculusrift
And you have 3D molecules supported, which could be extended, since the molecules example supports loading PDB files (http://en.wikipedia.org/wiki/Protein_Data_Bank_(file_format)). You could easily toggle between a stereo Rift view and a normal view.
There's a project called Oculus Bridge, done by a Portland design agency called Instrument. It's more along the lines of using three.js (JavaScript) and Cinder (C++) with WebGL, via web browsers that support WebSockets.
Code: https://github.com/Instrument/oculus-bridge
Announcement: http://weareinstrument.com/labs/all/oculus-bridge
There's an Oculus effect listed on the three.js site here: http://threejs.org/examples/#webgl_effects_oculusrift
My suspicion, from working with HTML5 apps, is that there may be more latency than with a native app. But I'm willing to be proven wrong by anyone who has actually worked with the Rift on browser-based content.
What do you mean by browsing the web?
For example, Flash is well known for being power-hungry. So if you're watching movies or listening to music (there still exist music streaming services that use Flash Player for audio, even though the interface is HTML), you might notice the remaining battery dropping. Even CSS and JS can be power-hungry if the website is not optimised (try browsing http://threejs.org/).
The current open-source bleeding edge, if you will, is three.js, Node.js and CoffeeScript, although I would not recommend it to a newbie. It is quite powerful, though, especially if you are very comfortable with JS and fairly low-level graphics programming.
Hi! Thank you! It's a website built with the 3D JavaScript library http://threejs.org. It's a GLSL shader with a picture of a pretty rainbow oil-slick JPEG behind it. I got some models off turbosquid.com, collaged them together, and also added some floating lights.
I personally use BabylonJS and can only recommend it, for 3D or mixed 2D/3D anyways. It's likely that Phaser and similar specialized 2D frameworks have better support for advanced 2D, though.
BJS has a lot of things going for it that I found other frameworks were lacking in:
PS: I'm not affiliated with them in any way; in fact, I used Three.js previously but found it far less enjoyable and productive to work with.
A pyramid is just a cone geometry with 4 radial segments, e.g. new THREE.ConeGeometry( radius, height, 4 ). Beyond that: have you ever created a website before, and do you know OOP? If you can read OOP, you can look at this.
> as it turns out you can't use actual instancing via BufferedGeometry.
This example uses something called THREE.InstancedBufferGeometry() but it doesn't appear to be documented.
Sorry, I overlooked that a vertex index is sometimes used in a BufferGeometry.
This page gives an overview of BufferGeometry in general. The use of an index is described on that page (near the bottom).
It appears BoxBufferGeometry does use an index. For example, a simple box (e.g. a cube) needs 6 vertices per face (two triangles of three) and has 6 faces, so 6 * 6 = 36 vertices are required. But the geometry data structure (in this particular case) reduces the amount of vertex information by using an index: it defines position data (x, y, z coordinates for each vertex) for 24 vertices, making 3 * 24 = 72 coordinate values, and the index then maps the 24 defined vertices onto the 36 required ones. In principle you only need to define the positions of 8 vertices for a box (cuboid); 24 are defined here because each corner is duplicated once per face it touches, so that each face can carry its own normals and UV coordinates.
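The arithmetic above, spelled out as plain JS in case it helps:

```javascript
// A box has 6 faces, each split into 2 triangles of 3 vertices:
var indexCount = 6 * 2 * 3;               // 36 index entries needed to draw it

// BoxBufferGeometry only stores 4 distinct corners per face
// (duplicated per face so each face gets its own normals/uvs):
var positionCount = 6 * 4;                // 24 positions actually defined
var coordinateValues = positionCount * 3; // 72 x/y/z numbers in the array

// The index then maps the 24 defined vertices onto the 36 required slots,
// so the position array stays at 72 values instead of 36 * 3 = 108.
```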
If you want to get a good understanding of a particular BufferGeometry, you can inspect the geometry data structure in the debugger. Here is a jsfiddle you can use: in the debugger, put a breakpoint at line 129, then hover the cursor over the word "mesh" and a popup will appear that lets you drill down into the relevant data structure; in this case, geometry/attributes/position/array lists the 72 individual coordinate values.
Here is the relevant documentation in case you have not seen it.
In any non-indexed BufferGeometry, three.js assumes that all faces have three vertices: vertices 0, 1, 2 map to face 0, vertices 3, 4, 5 map to face 1, and so on (vertex 0 being the first vertex in the vertex array).
This pattern helps speed up certain WebGL operations, because there is no need to look up vertex numbers for each face. However, it also means vertices are not shared between faces, so more vertices are needed for the same number of faces, and you cannot easily tell which faces share a given corner, since each face has its own copies of the vertices.
I think you're on the right track. Having a "WebGL version" just means you'll use the WebGL renderer in three.js. If it's low-poly with simple lighting, you might have the option to use the simpler CanvasRenderer.
I assume you can export the low-poly models into something that three.js can load, too. This OBJ loader example does just that.
You'll notice this one has some camera control built in: in the source, the code manually moves the camera around based on mouse position in each render loop. Libraries such as trackball.js can extend on this.
Those would be the basic components to a scene with a model in it that can look around.
What you're really asking here is "how do I use three.js?", as there's so much in this question, but I will answer as best I can! :) Have a look at Mr.doob's three.js examples on the website. You can have pointer lock controls (from the Pointerlock example), where the pointer is locked to the window (unless Esc is pressed to unlock it), which gives you FPS controls like in a PC game: http://threejs.org/examples/?q=misc_cont#misc_controls_pointerlock Then you'll need to either use the .json Blender export plugin or import your object into the three.js editor and File > Export as object: http://threejs.org/editor/
You will then need to put your texture file into a directory in the same way you would if creating a standard HTML file. Then you'll need code like:
var jsonLoader = new THREE.JSONLoader();
var texture = new THREE.TextureLoader().load( "texturefolder/texturename.jpg" );
jsonLoader.load( 'models/modelname.json', function ( geometry, materials ) {
    levelmodel = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { map: texture } ) );
    scene.add( levelmodel );
}, null );
So drag misc_controls_pointerlock.html (along with three.js and PointerLockControls) into a folder (remembering to change the file paths in the HTML, i.e. the bit that says "<script src="three.js">" etc.), add your model and texture, and then import the model and texture by copy-pasting the code I have given you (obviously changing the file names, but only the ones in quotes). You will have to work out collision yourself, as I'm a bit rubbish with that, but a quick look at the code will show you how to do it!
Unfortunately not, but it may still be good to use if you do go with using <video>
for your animations since you are able to use multiple sources: https://developer.mozilla.org/en/docs/Web/HTML/Element/video#Multiple_Sources_Example
You can do some very complex animations with SVG on its own or you could use libraries such as three.js (but three.js seems like it would be overkill for what you described) - you can see a bunch of examples (CSS, SVG, canvas, three.js/webGL) on this excellent resource (you can interact with some of the slides too).
No, I am not interested in UI. I just want to define the path, that's it. In this lathe example the path is defined inside a for loop using the Vector2 constructor.
I tried to add vectors tip to tip using the .add method of Vector2, in order to reproduce the first image (not an identical one, but something similar).
var dots = [];
var a = new THREE.Vector2( 10, 0 );
a.add( new THREE.Vector2( 0, 5 ) );
a.add( new THREE.Vector2( -5, 0 ) );
a.add( new THREE.Vector2( 0, 5 ) );
a.add( new THREE.Vector2( 2, 0 ) );
a.add( new THREE.Vector2( 0, 5 ) );
dots.push( a );
That approach did not work
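For what it's worth, the likely culprit is that Vector2.add mutates the vector in place and returns the same object, so the array only ever ends up holding a single final point rather than the whole path. A sketch of one fix, using a minimal stand-in Vec2 with the same mutating .add semantics (with THREE.Vector2 you'd do the same thing via .clone()):

```javascript
// Minimal stand-in for THREE.Vector2: .add mutates in place, like three.js.
function Vec2(x, y) { this.x = x; this.y = y; }
Vec2.prototype.add = function (v) { this.x += v.x; this.y += v.y; return this; };
Vec2.prototype.clone = function () { return new Vec2(this.x, this.y); };

// Accumulate tip to tip, cloning BEFORE each add so every intermediate
// point is kept instead of repeatedly mutating one vector.
var dots = [new Vec2(10, 0)];
[[0, 5], [-5, 0], [0, 5], [2, 0], [0, 5]].forEach(function (step) {
  var prev = dots[dots.length - 1];
  dots.push(prev.clone().add(new Vec2(step[0], step[1])));
});
// dots: (10,0), (10,5), (5,5), (5,10), (7,10), (7,15)
```

The resulting array of points is what you'd then feed to LatheGeometry.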
If you already have some kind of pipeline for displaying 3-dimensional objects (e.g. triangular meshes) you can just multiply every vertex by the appropriate matrix and re-display the object.
Maybe you could use http://threejs.org ?
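If you end up doing the vertex multiplication by hand rather than through a library, it's just a 4x4 matrix applied to a homogeneous point. A plain-JS sketch (row-major layout assumed; in three.js the geometry's applyMatrix helper does this for you):

```javascript
// Transform a vertex [x, y, z] by a 4x4 matrix stored row-major,
// treating the point as homogeneous [x, y, z, 1].
function transformVertex(m, v) {
  var x = v[0], y = v[1], z = v[2];
  return [
    m[0] * x + m[1] * y + m[2]  * z + m[3],
    m[4] * x + m[5] * y + m[6]  * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11]
  ];
}

// Example: a translation by (5, 0, 0).
var translate = [
  1, 0, 0, 5,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1
];
var moved = transformVertex(translate, [1, 2, 3]); // [6, 2, 3]
```

Re-displaying the object is then just re-uploading (or re-drawing) the transformed vertices.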
Thank you so much for your help! It looks like the library that threejs used in their examples makes the gpgpu stuff a bit easier. I will try messing with that tomorrow.
Anyway, I'm getting a little off topic, but just for anyone reading: I had an idea to create two heightfield meshes per chunk, one to pass to PhysiJS and the other for actual rendering. The one passed to PhysiJS has fewer vertices. The other is a THREE.PlaneBufferGeometry with lots of vertices for detail. The CPU generates the noise that creates the general, broad shape of the terrain and sets the vertices of both meshes accordingly (or maybe even uses the GPGPU stuff for this noise).
This seems to be the best approach, as then I can create a vertex shader that adds high frequency noise for the mesh that needs to be rendered (to create a rough look). Additionally, the physics engine doesn't have to calculate for all those tiny vertices and waste a lot of computational time.
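Just to make the two-resolution idea concrete, here's a plain-JS sketch that samples one noise function at two densities: a coarse grid for the physics engine and a dense one for rendering. The noise function and the resolutions are placeholders, not anything PhysiJS-specific:

```javascript
// Sample a height function on a (resolution+1) × (resolution+1) grid
// covering a square chunk of the given world size.
function sampleHeights(size, resolution, noise) {
  var heights = [];
  for (var i = 0; i <= resolution; i++) {
    for (var j = 0; j <= resolution; j++) {
      heights.push(noise(i * size / resolution, j * size / resolution));
    }
  }
  return heights;
}

// Placeholder for the broad, low-frequency terrain noise.
var broadNoise = function (x, z) { return Math.sin(x * 0.1) * 5; };

// Same underlying shape, two densities:
var physicsHeights = sampleHeights(100, 8,  broadNoise); // 9 × 9  = 81 samples
var renderHeights  = sampleHeights(100, 64, broadNoise); // 65 × 65 = 4225 samples
```

The render mesh then gets its extra high-frequency detail in the vertex shader, so the physics engine never has to see those tiny vertices.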
Babylon.JS has WebGL support. But so does Three.js: http://threejs.org/examples/
Urho is actually not bad. The added AngelScript support is great. If you just want C++ with an OpenGL backend, maybe try gameplay: http://gameplay3d.io
However, if you're not familiar with C / C++ all of these are going to be too much. Learn C++. Learn how to make an X11 window, how to hook that up to OpenGL, how to load models, meshes, textures. The learning curve of an engine is steep, but learning C++ is just as steep.
I used Threejs with a mix of GLSL shaders, SSAO, MSAA, RGB shift + Tiltshift. You can see a lot of Threejs demos on their website, it's all opensource: http://threejs.org/examples/#webgl_postprocessing_ssao
Hey thanks for the reply.
My logic engine is custom but my renderer is threejs.
The world is 3D, the only 2D elements are the sprites and UI, but the sprites are Sprite materials rendered into a mesh.
I guess what I'm trying to do is figure out a way for my engine/framework (it's kind of half custom half open source) to be able to handle this stuff.
What you've said has helped me understand how it might be achieved though so really, thanks a lot for that!
A three.js spotlight inherits from Object3D. Do you know how inheritance works? Basically, you can use the functions of an Object3D on a spotlight. All Object3D objects have things like rotation, position, etc.; lookAt might be a useful choice for you.
Thanks!
I'd personally just use three.js, but then I am a programmer, so I like not relying on large third-party applications like Unity to do things I may find fairly simple.
Take a look at this example in three.js, and click "view source" at the bottom right. That code is nearly all you'd need. If you look at lines 64-70, that's all it takes to add some lights; lines 100-120 are all you need to load a model. There are a few more lines to start the render, move the camera around, etc. You could create all the GUI in HTML and lay it over the top. Personally, I don't think that's much harder than learning all about Unity.
Whichever you choose, if you need any further help or advice do get in touch, I love making things like this!
Doesn't look like Sketchfab use this, but a popular 3D graphics tool is THREE.js.
You could definitely build the rest of the site in WordPress, and develop your own plugin to support 3D (unless you can find a plugin which does this for you already)...
It won't be easy, but it can be done.