In a CPU core, various types of calculations are actually done in physically different parts of the chip. For example, integer arithmetic runs on different silicon than floating-point (decimal) arithmetic. A CPU with multiple hardware threads per core takes advantage of this by letting two programs run on a single core at the same time, as long as they don't both need the same type of execution unit at the same time. This is why running two threads on a single core usually isn't a doubling of performance: if both programs are trying to do integer arithmetic at the same time, for example, they have to wait for each other.
To make it more ELI5, imagine a CPU core is a kitchen. A program that wants to run is a recipe that needs to be prepared. The various parts of a CPU core are the various areas of the kitchen. You only have 1 oven, 1 stove, 1 cutting board, etc.
Adding additional cores is like adding an entire extra kitchen. Adding CPU threads to a core is like adding another chef to the kitchen. Sure, you can now cook multiple dishes in that kitchen now, but only as long as they don't both want the oven at the same time.
EDIT: As an anecdote, I used to do a lot of work with POV-Ray, a free open-source 3D rendering app. On my 4-core (8-thread) i7-3770, if I tell POV-Ray to run with only 4 threads so it uses one thread per core, and then compare that to a full 8 threads, the 8-thread run only gains about a 10% speed increase, because the vast majority of POV-Ray's work is floating-point math, so it can't effectively take advantage of a multi-threaded CPU core.
This might be unusual, but I'm really interested in the performance difference in this ray-tracing software, POV-Ray.
POV-Ray is very CPU (and RAM) intensive. The free download comes with various pre-built scenes to render, but I can send you some even more intensive code to test with too.
The thing is, GPUs have been very well optimized specifically for drawing flat triangles. They're versatile enough that you probably could command them to draw a genuinely curved surface, but they would do it much less efficiently, so you'd get a huge drop in performance from the same hardware. In order to render the game at a playable framerate, you'd have to make enough concessions in graphics detail that it would just end up looking shittier anyway.
There are a number of 3D art programs, such as POV-Ray, that will gladly give you a 'true' curved surface. However, they run on the CPU and use different algorithms, and do not render nearly fast enough for typical gaming purposes.
I was ray-tracing with POV-Ray on a 386 back when having an FPU was a feature, not a given. I have been waiting for real-time ray tracing ever since. It would sometimes take days to get a single frame, but the quality was unreal.
Eh, maybe. First, I might have chosen my hobby, but I didn't choose it to make me employable, I chose it because I found it fun. What if I'd found something less marketable fun, like collecting butterflies, or writing fan-fiction (leaving aside outliers like the author of 50 Shades of Grey)? That the particular hobby I chose lets me earn a living is a happy accident.
My hobby is computer programming. My parents had a computer in the house (this was before ubiquitous PCs, smartphones, and tablets, when PCs were much more expensive than today) and got me books on programming. My mom is a programmer herself. I didn't really like programming at first; my parents hoped I would, and tried to nudge me for years until they eventually found the right "gateway drug": POV-Ray.
So yeah, I chose it, but that doesn't mean my choice was the result of pure, noble, individual effort. I got lucky with who my parents were, and how much they encouraged me. If we didn't have a computer in the house, or if my parents were working 3 jobs cleaning offices and weren't around to encourage me, things might have turned out the same, but it's likely they would have been very different.
I'm here in TiA, and naturally I think SJW's preoccupation with "privilege" is bullshit. But that doesn't mean we have to go whole-hog in the other direction. Plenty of people inherit advantages that they did not earn; I'm one of them.
Raytracing is a very, very old technique. For example, the first version of what's now known as POV-Ray came out in 1991.
It's computationally expensive, however, so what's recent is figuring out ways to do it in real-time rather than in massively expensive offline jobs.
I don't disagree that his videos look nice, but, and this will always be the case, polygons will win. I'll try to explain this quickly: voxels are neat. They are accurate, but they are only as detailed as their resolution. When any form of additional detail (reflections, subsurface scattering, radiosity, realtime lighting, etc.) has to be added, the process becomes a raytrace (slow; see POV-Ray). Polygons are limited in number, but their content is interpolated and splined to create the appearance of a smooth surface. There are thousands of optimizations and shortcuts available for polygon rendering that improve its render time. It's also extremely simple to limit detail based on distance, something that's required to add any of those effects. These features can never be implemented on a voxel system. When computers become powerful enough to run a complex voxel system, a new generation of polygon engines will arise that will still be more efficient, cost-effective, and flexible than a voxel system. Polygon- and shader-based rendering are the best things that have happened to the raster process.
Ok what precisely is the fraction then?
Edit: fine, here is the proof, corrected for shot angle.
If Sultan is 98" tall, short guy is 70" tall in this frame of reference or ~5'10". So yeah I guess I was within 5%.
Hence, the optical illusion.
Pov-Ray, The Persistence of Vision Ray Tracer was used to do this rendering. Ultra Fractal was used to generate the Mandelbrot image used in Pov-Ray to create the scenery. The Quaternion Julia is a function built-in to Pov-Ray.
Lego Digital Designer to create the model
LDD to POV-Ray to convert the geometry (I used a lot of customization on the .pov file after the fact though)
POV-Ray to render the scene
It probably took days to render on those old Pentiums. Raytracing can create stunning scenes, though. POV-Ray is the software used to generate those.
For my images, I use POV-Ray. It's a raytracer which has been around since 1991 or so. Everything in the scene is described using a text-based language (similar to the C programming language in syntax), from which POV-Ray generates an image.
To start out, there's an overview of the software here. There are also a large number of tutorials and examples by Friedrich Lohmueller.
I'm not particularly qualified to say whether this is a suitable answer, but when I showed OpenSCAD to a user in our makerspace, he told me he sees similarities between OpenSCAD and POV-Ray (I believe the latter is bitmap-based). The website is, in my opinion, challenging to navigate, hence my hesitation to call this a solid answer, but it may be worth pursuing.
After watching the first tutorial I found, I can see why the comparison was made. I don't currently have a need for this utility, but I might enjoy learning it for the future.
Sure, that's what all the software does, because normal maps are made to be so. I guess there is no software for that because the use case is rather out of scope for games.
But in principle, u/Nexustar's idea would be rather easy to implement (well, for some relative definition of "easy", as usual).
Back in the day (decades ago) I used to create grayscale Mandelbrot pictures and import them into the POV-Ray raytracer; it had a function where you could load an image and transform it not into a normal map (where it's all just fake), but into an actual 3D object. Oh, it's actually still there: height fields. http://www.povray.org/documentation/3.7.0/r3_4.html#r3_4_5_1_5
I have not kept up with 3D graphics, but such height maps could surely be made in the general case of a model, by not restricting them to a 1x1x1 cube, but being able to map them onto a triangle. One would have plenty of fun figuring out how to scale everything to be just right, but I see no theoretical or practical hindrances.
Similar to how you can have threads in Fusion360 either as simple image/texture, or "modeled".
Imagine a horizontally aligned circle on an imaginary X and Z axis.
Now put the circle in a baseball field.
Now imagine the pitcher on the very edge of the circle at 0 degrees, and the batter is on the opposite end of the circle at 180 degrees.
The pitcher is a righty, and the batter is a righty in this example. The right-handed pitcher will probably release the ball at the edge of the circle around 358 degrees and if the ball was thrown a perfect strike as a fastball, it would cross the plate around 178-179 degrees.
From the batter's POV @ 180 degrees, by the time the fastball falls in line with the pitcher [at 0 degrees] and the batter's eyes [at 180 degrees], the batter's eyes are no longer able to track the ball, and the batter has already made the decision whether or not they were gonna swing.
So, it doesn't really matter what color uniform they wear. It is, however, illegal for them to wear a white/grey wristband or white/grey undershirt (if the undershirt extends past the uniform's sleeves).
That is the best answer I can think of. It makes sense to me; hopefully it does to others as well.
Yeah it's free. You can check out some of my posts to see what the renders it creates look like.
You actually need two programs, one is called LDD2PovRay and it converts your .lxf file to a .pov file and changes the render settings - basically, you pop your file in this and change the settings and hit "convert". You'll also need Pov-Ray itself. Once you hit convert in LDD2PovRay it'll automatically open up Pov-Ray and start rendering the file.
On my computer the highest-quality renders can take anywhere from a few minutes to an entire day, it depends on the size of the model and especially the number of transparent parts. More than like five transparent parts in the model will exponentially increase the rendering time. This robot should be fine since the parts are small, but I'd be careful about adding more. Also, transparent fluorescent colors don't work for some reason.
Call me old-fashioned, but I really like Olex and POV-Ray. I only use Olex cause it's what I was trained on when I analysed single-crystal diffraction as an undergraduate, but POV-Ray is great (albeit old) for just about any cartoon-ish image you might need to make for publication.
IIRC, my workflow used to be Olex --> export as .cif --> POVChem --> POVRay
Learn a computer graphics package. I use POV-ray and Blender. You may want to check out some of the modelers or sculptors they recommend as it will decrease the time you spend making a geometry.
The image is from 2004. It's not state-of-the-art by current or even 2004 standards, but the original author released it and the corresponding POV-Ray code in the public domain which means that you can install POV-Ray and render the image yourself.
I've done a bit of parallel work in the past... specifically with the Persistence of Vision raytracer. That kind of computation is PERFECT to parallelize. It was especially neat to see it visually too... rendering small chunks here and there all over the screen instead of just line by line in order.
I've also done parallel compilation. It isn't perfect, at least not with the huge huge projects I work with... so we tend not to use it, breaks too much.
Oh, and the parallelism I'm talking about is more... distributed than just threads on a single computer. I used 10 computers to render my scene; this was before processors had more than one core.
The threads on the same computer I use all the time for compilation - Gentoo supports it naturally... as does make. So next time you compile a kernel in Linux, just run make -j9 to spawn your threads. It's recommended to use n+1 jobs, where n is the number of hardware threads: if your comp has 4 cores with hyperthreading (8 threads), the number's 9.
Download pov ray and play with it for a while. Learn what it takes to create a very simple scene with just a couple of objects. Now think about how much effort it was, and how much more effort it would take to create the visually rich environments and objects needed for movies and TV shows.
> So e.g. when you place two disks adjacent to each other so that they form a "blob", i.e. "group", the border of this blob has sharp features.
This is pretty similar to 2D metaballs.
POV-Ray's blob function is a variant of these that lets you tweak the strength of each point:
> density = strength * ( 1 - (distance/radius)^2 )^2
> where distance is the distance of a given point from the spherical blob's center or cylinder blob's axis. This formula has the nice property that it is exactly equal to the strength parameter at the center of the component and drops off to exactly 0 at a distance from the center of the component that is equal to the radius value. The density formula for more than one blob component is just the sum of the individual component densities.
Then color a pixel based on which player has a higher density at that point. You need the radii of "stones" to overlap, though, in order for them to interact. It seems like it would be natural to enforce some threshold or minimum distance between center points, or at least to prevent you from playing directly or almost directly on top of your opponent's pieces.
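The coloring rule described above is easy to prototype. Here's a minimal Python sketch (the function names and two-player setup are my own illustration, not any real implementation) that applies POV-Ray's blob density formula and decides which player owns a pixel:

```python
def blob_density(px, py, cx, cy, radius, strength):
    """Density of one circular 'stone' at point (px, py), using POV-Ray's
    blob formula: strength * (1 - (d/r)^2)^2, which equals `strength` at
    the center and drops to exactly 0 at distance `radius`."""
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    if d2 >= radius ** 2:
        return 0.0
    t = 1.0 - d2 / radius ** 2
    return strength * t * t

def owner(px, py, black_stones, white_stones, radius=2.0, strength=1.0):
    """Which player 'owns' a pixel: the one with the higher summed density
    there (densities of multiple components just add, as in POV-Ray).
    Returns None on a tie, including where both densities are zero."""
    b = sum(blob_density(px, py, cx, cy, radius, strength) for cx, cy in black_stones)
    w = sum(blob_density(px, py, cx, cy, radius, strength) for cx, cy in white_stones)
    if b == w:
        return None
    return "black" if b > w else "white"
```

Evaluating `owner` over a pixel grid gives the blobby territory picture; the overlap requirement falls out naturally, since stones farther apart than `2 * radius` contribute nothing to each other's pixels.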
Nice! I used to play around with POVRay, this reminded me of one of the default images. It didn't have the faces, though. It's probably been a decade since I've messed with it, this makes me want to pick it up again.
I hope mindbleach doesn't mind if I take over.
I'll use this diagram to illustrate. To be able to show an image of a scene that has been constructed in 3D on the computer, you need to create a view plane (think of it as a window into the world). The perspective of this view plane is governed by the location of the "camera", or the point on the left side of the diagram (labeled as "location"). By changing the field of view ("angle"), the direction of the camera, or the camera's location in 3D space, you can change what you're looking at in the 3D world. It's basically a way to change stored 3D objects into a visual image.
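To make the view-plane idea concrete, here's a rough Python sketch of how a renderer might turn a pixel coordinate plus the camera's location, look-at point, and field-of-view angle into a ray direction. The function name and conventions are my own simplification; a real camera model also takes an explicit up vector, handedness convention, and so on:

```python
import math

def primary_ray(px, py, width, height, location, look_at, fov_deg=60.0):
    """Direction of the primary ray through pixel (px, py) for a simple
    perspective camera. Assumes the camera isn't looking straight up/down
    (the world-up cross product would degenerate in that case)."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def norm(v):
        l = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
        return (v[0]/l, v[1]/l, v[2]/l)
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

    # Camera basis: forward toward look_at; right/up via cross products.
    forward = norm(sub(look_at, location))
    right = norm(cross(forward, (0.0, 1.0, 0.0)))
    up = cross(right, forward)

    # Map the pixel onto a view plane one unit in front of the camera.
    half_w = math.tan(math.radians(fov_deg) / 2.0)
    half_h = half_w * height / width
    x = (2.0 * (px + 0.5) / width - 1.0) * half_w
    y = (1.0 - 2.0 * (py + 0.5) / height) * half_h
    return norm((forward[0] + x * right[0] + y * up[0],
                 forward[1] + x * right[1] + y * up[1],
                 forward[2] + x * right[2] + y * up[2]))
```

Changing `location` moves the camera, changing `look_at` turns it, and changing `fov_deg` widens or narrows the window into the world, exactly the three knobs described above.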
Other people in the thread have suggested math competitions, which I agree are worth looking into. Of course, not everybody who likes and is good at math enjoys those, but they're worth trying. Me, I always preferred to pick my own problems to play around with, and I wasn't fond of the time pressure involved in solving puzzles on the clock. I think that's a "you don't know until you try" kind of thing, honestly.
I picked up computer programming fairly young (in around the 5th grade), by reading books and trying things with a couple friends who were also into it. This opened up more possibilities for math-filled activities that I could pursue just for the fun of it: exploring chaos theory via the logistic map, simulating predator/prey ecosystems, etc. Then my friends and I would take our programs to a competition (the Alabama Council for Technology in Education sponsored them) and we'd win prizes that would eventually look good on our college applications. The point was the fun of doing things together; the auxiliary benefit was being able to get an official recognition of some kind that would aid in that grand tradition of, upon graduation, getting as far away from high school as possible.
Other extracurriculars may have a mathematics component. Probably not debate team, yearbook or the literary magazine (though I did make some POV-Ray art to fill empty spaces in the latter). There are Science Olympiads, of course, and "Academic Team"/"Quiz Bowl" competitions typically devote some percentage of their questions to mathematics.
Same. Amazingly, it still exists (and it looks like they haven't updated the website since 1993).
Blender is a much better solution these days.
So something I did to achieve that is using Rhino's incredible exporting capabilities to export the scene I wanted to render as a .pov file. That file lets you put the scene into POV-Ray (Persistence of Vision Ray Tracer), a very old-school text-based renderer. The initial release came out in 1991, and tons of early breakthroughs in computer graphics and renders were done in it. This way is definitely not the most efficient, but it's a way to make renders using the exact same program that was used to make the graphics you're describing. Definitely check out some of the tutorials on how to use POV-Ray, but if you set up your scene well in Rhino, most of the work will be done for you already.
Also, complete side note, but I'm in architecture school and have been obsessed with this style of render for a bit. I haven't done any actual renders in the style, but I ripped textures from the LSD: Dream Emulator game files and used them in some final drawings for one of my studio projects.
also come visit /r/vintagecgi to look at amazing early computer graphics.
I grabbed the sources from the pov-ray site and rebuilt them with MinGW 5.10; basically I removed the video code from the IBM GCC port and used that.
My i7-9700k at 4.66Ghz cranks it out in 8 seconds.
I still remember having this 286 I was given for free I used to run Pov-Ray on, the big thing was that I had QuickC for Windows so I could make a 'protected mode' version, and it had an 80287XL so it wasn't insanely slow, but it was the only machine I had with a floating point processor.
as always it's amazing how time moves on.
From the creator's website:

>Rebel Ship is an animation of a space ship coming down to land at a platform. The platform is safely hidden from evil eyes in a dense evergreen forest. The only witness to this landing is the immense gas giant and the array of stars shimmering in the far distance.

>I created the majority of this animation using Autodesk's 3D Studio. Just to clarify, POV-Ray was still used in the making of this animation. Sorry, I just can not create something without POV-Ray being used. Hrmm... In fact, most of the cool looking things were POV-Ray rendered and then composited into the final animation. The plasma jetting out of the space ship's engines, the gas giant, and the distant star field was done with POV-Ray. I primarily used Autodesk's 3D Studio to create this animation as I found that many of the large CG companies want people with Autodesk's 3D Studio experience. So I figured I'd see what all the commotion was about. Having now used Autodesk's 3D Studio, I can see why many companies would choose to create their graphics and animations with it. Regardless of this fact, I didn't like it enough to be converted. POV-Ray just has too much power.
No problem! Here's some static images with the outside of the torus covered and the same animation but with sky and ground removed: https://imgur.com/a/qHsFVxq And here's the basic pov-ray scene (non-animated) to play around with (grab pov-ray from http://www.povray.org/ and paste the code below into a new file.)
global_settings { max_trace_level 256 }

camera {
  location <-10, 5, -20>
  look_at <-2, 0, 1>
}

light_source { <1000, 4000, -2000>, rgb 0.8 }

// Ground
object {
  plane { y, -40 }
  texture {
    pigment {
      granite
      pigment_map {
        [0.0 rgb <0, 1, 0>]
        [1.0 rgb <0, 0.5, 0>]
      }
      scale 1000
    }
  }
}

// Sky
sky_sphere { pigment { color rgb <0.0, 0.5, 0.625> } }
object {
  plane { y, 1000 }
  texture {
    pigment {
      granite
      pigment_map {
        [0.5 rgbt <1, 1, 1, 1>]
        [1.0 rgb <1, 1, 1, 0>]
      }
      scale 5000
    }
    finish { ambient 0.8 }
  }
}

// Half Torus
object {
  torus { 10, 3 sturm }
  clipped_by { box { <-15, -5, 0>, <15, 5, 15> } }
  texture {
    pigment { rgb 1 }
    finish { reflection { 1.0 } ambient 0 diffuse 0 }
  }
}

// Cube
object {
  box { <9, -1, -1>, <11, 1, 1> }
  texture { pigment { rgb <1, 0, 0> } }
}
The sort of thing you're interested in is totally possible in FreeCAD. Check out these threads on the user showcase section of the FreeCAD forums:
FreeCAD also has a Raytracing Workbench which uses POV-Ray under the hood, so you can definitely make nice renders of your models in an integrated fashion.
If you find bugs in FreeCAD we're interested in fixing them. I'd recommend making an account on the forum and introducing yourself in the help section, you can get high-quality help from veteran users which will make things much easier. It's only going to keep getting better, but it takes a little collaborative effort.
That very well may be true. I am just starting with this hobby, and OpenSCAD reminded me of POV-Ray, which I played around with back in college. That, coupled with a programming-conditioned brain, made OpenSCAD feel a bit more natural than other, more traditional, modeling tools.
I tried playing around with Tinkercad a bit, but I couldn't really figure it out. So I fell back to my default - programming.
But as I said, still learning the ropes, and I might just find another workflow that suits me better.
For anybody interested, you can get the newer version here, including older versions of POV-Ray. As far as I can tell, geon used version 3.01, which is located in the 3.0 folder. Enjoy and learn!
http://www.povray.org/download/ http://www.povray.org/ftp/pub/povray/Old-Versions/
> I legit thought this was a 3d render, it looks that nice.
You know you have been raytracing too long when … You wonder which raytracer God used.
Possibly
"POV-Ray - The Persistence of Vision Raytracer"
http://www.povray.org/
{I have not used it}
&
"YafaRay "free rays for the masses" in Blender , etc"
http://www.yafaray.org/
{I have not used it}
I don't know if it's what you're looking for, but the raytracing program POV-Ray has a system to turn a 2D greyscale map into a 3D heightfield. It's had that feature for at least 20 years.
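The heightfield idea itself is simple enough to sketch in a few lines of Python (my own illustration, not POV-Ray's actual code): each pixel of the greyscale map becomes a vertex whose height comes from its brightness.

```python
def height_field(gray, scale=1.0):
    """Turn a 2D greyscale map (a list of rows of 0..255 values) into 3D
    vertices: x and z come from the pixel grid (normalized to 0..1),
    y comes from pixel brightness. A renderer would then connect
    neighboring vertices into triangles to get a surface."""
    h = len(gray)
    w = len(gray[0])
    verts = []
    for z, row in enumerate(gray):
        for x, v in enumerate(row):
            verts.append((x / (w - 1), v / 255.0 * scale, z / (h - 1)))
    return verts
```

A 2x2 checkerboard of black and white pixels, for instance, produces four corner vertices alternating between height 0 and height `scale`.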
On a Mac?
I don't know if you could find anything. When rendering LDD files, I've had the most success with POV-Ray.
Trouble is, again, it appears that the latest version isn't Mac compatible. You could run an emulator though. If you did that, you'd still need to use LDD to POV-Ray to convert the LDD file into something readable by POV-Ray. Luckily, both pieces of software are free, and if you have an emulator, fairly easy to use.
Good luck.
I'm not sure how to make a makefile for POV-Ray! If you have some steps, I'll follow them.
To run the code, you just need to download POV-Ray for free here and then download my .pov file, open it up and hit run.
POV-Ray, but you'll have to do some googling to get it working just right. "How to render Lego LDD models in POV-Ray" or some combination thereabouts should turn up some helpful links.
Can't really give you much more info than that as it was a while ago and I've since lost most of those files due to a HDD failure.
I couldn't tell you how to implement one, but the theory behind raytracing is that you work backwards from how a scene is normally lit in nature. In the real world, light from a source (the Sun, a lamp, etc.) casts rays of photons at an object, and those rays are then reflected off the object to your eye. Raytracing goes the other way: the color and intensity of each pixel on your screen is determined by tracing a path from that pixel to the object and then to the light source. POV-Ray is a raytracer that's been around for decades. I remember my brother-in-law playing with it on his 386; it would literally take days back then to render a scene with more than a few objects, reflections, and light sources. The source code is available, so maybe you can get some ideas about how it works by looking at it. All that stuff is way over my head. I just know the ELI5 bit. :)
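To make that "working backwards" idea concrete, here's a toy Python sketch of the core step: follow an eye ray to a single sphere, then point a diffuse term back at the light. This is my own minimal example (one sphere, no shadows, no reflections), nothing like POV-Ray's actual source:

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to the first sphere hit, or None.
    Solves |origin + t*direction - center|^2 = radius^2; assumes
    `direction` is normalized, so the quadratic's 'a' term is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light):
    """Backward tracing in miniature: trace the eye ray to the sphere,
    then shade the hit point toward the light (Lambert's cosine law)."""
    t = ray_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0  # background
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    to_light = tuple(l - h for l, h in zip(light, hit))
    length = math.sqrt(dot(to_light, to_light))
    to_light = tuple(v / length for v in to_light)
    return max(0.0, dot(normal, to_light))
```

Calling `shade` once per pixel (with a per-pixel `direction`) is essentially all a basic raytracer does; everything else, like reflections, shadows, and refraction, is recursion on the same intersection step.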
If you install POVRay, LDD to POV Ray will automatically load the POV file into POVRay and start the render.
So you need to install POVRay http://www.povray.org/ This does the actual rendering
and you need to install LDD to POVRay. This converts the LXF file into a POV file
http://ldd2povray.lddtools.com/
Follow these instructions to configure
This page even says: Last edited 7 years ago [...] This image was selected as picture of the day on the English Wikipedia for August 2, 2006.
I remember playing with POV-Ray back in 2005, but even then it looked like development had completely stopped. From the timestamps on their download page, it looks like POV-Ray 3.7, the first version to offer multi-processor support, was released in 2013, eight years after 3.6.
So even the most recent version of POV-Ray is the IE7 of the CGI world.
>What are some other cool things I should consider doing with this beast?
I remember messing around a bit with POV-ray back in the day. The first thought I had was to wonder how quickly your server could render really complex scenes.
I wouldn't try to explain Perlin noise to a new programmer or someone without a decent understanding of vector math. Beyond those requirements, I don't think it's that difficult to understand, but in my Google search just now I found that the online resources are lacking. This seems like a decent starting point; there's an implementation of the algorithm and a link to a presentation with some explanation.
I originally learned about it by reading the POV-Ray source code.
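For the curious, the core idea is small enough to sketch. Here's a 1D gradient-noise toy in Python, my own simplification: real Perlin noise is 2D/3D and uses a fixed permutation table, but the structure, random gradients at integer lattice points blended with a fade curve, is the same:

```python
import math
import random

def make_gradient_noise(seed=0, size=256):
    """Build a 1D Perlin-style gradient noise function: random gradients
    at integer lattice points, smoothly interpolated between them."""
    rng = random.Random(seed)
    grads = [rng.uniform(-1.0, 1.0) for _ in range(size)]

    def fade(t):
        # Perlin's quintic fade curve: 6t^5 - 15t^4 + 10t^3.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def noise(x):
        i = math.floor(x)
        f = x - i  # fractional position within the lattice cell
        g0 = grads[i % size]        # gradient at the left lattice point
        g1 = grads[(i + 1) % size]  # gradient at the right lattice point
        # Each lattice point contributes gradient * (offset from it);
        # note this is exactly 0 at the lattice points themselves.
        v0 = g0 * f
        v1 = g1 * (f - 1.0)
        t = fade(f)
        return v0 + t * (v1 - v0)

    return noise
```

A telltale property falls out of the math: the noise is zero at every integer coordinate, with all the variation coming from the gradients in between, which is why summed octaves at different scales are used in practice.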
It sure is! Most of the industrial design renders, official car photos, etc are made in a 3D ray-tracing program like CPU-based VRay, Flamingo, and Maxwell; or the GPU-based Octane Render; or the free CPU-based POV-Ray.