SFML does not create a depth buffer by default:
http://www.sfml-dev.org/documentation/2.0/structsf_1_1ContextSettings.php
Change line 29 to:
sf::RenderWindow window(sf::VideoMode(WINDOW_WIDTH, WINDOW_HEIGHT), "OpenGL", sf::Style::Default, sf::ContextSettings(24));
Make sure you're using the latest CMake version so you know it has proper Visual Studio 2015 support. Same goes for GLFW; I just tested it with the v3.1.1 source package and it works.
If you don't already have the latest CMake installed, uninstall the old version and then install v3.3.1 (the latest at the time of this post). Then extract the glfw-3.1.1.zip archive, run the CMake GUI, and follow the directions in your linked tutorial about choosing the source directory and picking a build directory. When you get to the Configure stage, a wizard will ask which project type to generate for; in this case you want "Visual Studio 14 2015". Keep "Use default native compilers" selected and hit Finish. The wizard dialog will close and a lot of entries in the listbox will be red. Just press Configure again and then press Generate, which should be really fast.
Afterward, browse to the glfw-3.1.1\build directory and open the GLFW.sln Visual Studio solution file. From there you should just build it, and then you can continue on from the tutorial.
Hopefully that works and the problem was that you previously chose the wrong generator during the first Configure CMake GUI stage.
Edit: If you made a mistake choosing the generator, you'll need to delete the CMake cache by going to the File menu, selecting Delete Cache, and answering Yes at the cache deletion confirmation prompt. Then press the Configure button again to get the project generator wizard dialog again.
>the best way to add or remove things from the buffer
The usual way is to use glMapBufferRange. It returns a pointer to the buffer, which you can modify however you want. You must call glUnmapBuffer before using the buffer again, e.g. drawing with it. This technique requires OpenGL 3 or ES 3.
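A minimal sketch of that (vbo, offset, size, and newData are stand-ins for your own buffer and data):

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    memcpy(ptr, newData, size);      // modify the mapped range however you want
    glUnmapBuffer(GL_ARRAY_BUFFER);  // required before drawing with the buffer again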
OpenGL 4.4 added a newer API (glBufferStorage with GL_MAP_PERSISTENT_BIT, via ARB_buffer_storage) that lets you keep the buffer mapped while using it, but you have to handle synchronization yourself. If you have to modify the buffer frequently this can be much faster.
>exclude just a few verticies from being drawn in one frame and have them re appear the next frame?
There are a lot of ways to do this. Try looking at all the available drawing functions here and see which one fits your needs best:
For a more modern approach, OpenGL 4.0 includes shader subroutines. Basically they are function pointers that you switch between at runtime with very little overhead (as opposed to something like if statements).
This allows you to have 1 giant supershader that has all your sub shaders in it.
Also as of OpenGL 3.1 there are shared uniform buffer objects. Basically you make 1 block of uniforms for all your parameters and they can be shared between shaders. So you don't have to do things like upload a matrix a bunch of times.
This allows you to do things like put all your material information into an array in your shaders and just choose which one you want rather than upload the values every time.
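On the application side, wiring a shared block up is just a few calls (a sketch; the block name "Materials", binding point 0, and materialUbo are placeholders):

    GLuint blockIndex = glGetUniformBlockIndex(program, "Materials");
    glUniformBlockBinding(program, blockIndex, 0);        // tie the block to binding point 0
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, materialUbo);  // attach the UBO once; every shader using the block sees it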
But it's not so useful if you need to support older hardware.
Have a central Materials class that lets you choose which shader you want to use, i.e. material.usePhong() or whatever.
Implement some way to generate shader code. There is an ARB #include extension (ARB_shading_language_include, written against OpenGL 3.2). If you don't have that extension it's easy enough to fake. You will also want some basic code generation so you can generate the uniform blocks and share code between shaders.
Yes. You can render a frame, grab a copy of the framebuffer, and save it to any image file you would like, per camera view. You can already do that in, say, Blender, among other 3D modeling applications, if you really just care about getting images of various side views of the model without any coding.
If you indeed want to do it yourself using OpenGL, there are several resources on the sidebar of this subreddit about using modern OpenGL and how to go about doing several things. You'll want to learn how to create a C/C++ OpenGL project, create a graphics context window, draw simple meshes and handle the viewport, load and use textures (images) on said meshes, and capture the framebuffer and save it to an image.
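The capture step itself is short; a rough sketch (width and height are your viewport size, and saving to PNG is left to a library like stb_image_write):

    // read the current framebuffer back into CPU memory
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    // rows come back bottom-up, so flip vertically before saving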
I suggest using the Open Asset Import Library (Assimp) to make loading 3D model files simple.
According to the spec 1.5 has it, 1.3 does not:
http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.30.10.pdf
http://www.opengl.org/registry/doc/GLSLangSpec.1.50.11.pdf
Why it works on your card, but not his, I have no clue.
Calling glutSwapBuffers() immediately after drawing is not optimal.
When you issue commands to OpenGL, they're effectively just added onto a queue of commands for the GPU, which executes them in the background; the calls themselves return immediately. A call to glutSwapBuffers() (or Present(), if you're on DirectX) can then cause the program to stop and wait for those operations to finish, e.g. when vsync is enabled or the driver's queue of pending frames is full. (Strictly speaking, glutSwapBuffers() performs an implicit glFlush(), not glFinish(); glFinish() is the call that does not return until OpenGL is done with whatever commands it has queued up.)
http://www.opengl.org/sdk/docs/man/xhtml/glFinish.xml
Additionally, any commands that read data back from OpenGL, such as glReadPixels() or fetching the results of hardware occlusion queries, can cause the program on the CPU to stall while waiting for the GPU.
In order to fully utilize both processors, you should try not to have one waiting on the other all the time. I'd possibly even go as far as calling draw() before your update().
Another option is to just go multi-threaded and have one thread dedicated to feeding the GPU commands.
your question was actually rambling and incoherent, and your response to people trying to figure out what you were asking was rude.
read the extension definition like the rest of us.
https://www.slideshare.net/DevCentralAMD/vertex-shader-tricks-bill-bilodeau, specifically slide 19. I've seen a few other people mention it, but I lost the links to the history.
Multiple blur passes are the fastest way to get a large bloom radius. In step 2, doing the blur with a straight 1000-pixel kernel would take ages, even split into vertical and horizontal passes. Using downscaled versions is quick and gives you a nice bonus: you can adjust the weight of each blur radius/layer. This is what UE4 also supports. I have some short notes on my blog. Basically the papers you want to look at are a GDC paper from 2004 by Masaki Kawase and 'Unreal 4 - The Technology Behind the Elemental Demo'.
I'm not an expert on the topic and you've probably already read this: https://www.kernel.org/doc/Documentation/fb/framebuffer.txt
But have you tried reading the pixels from SDL and writing them to /dev/fb1, as raw image data?
Build Mesa using scons and drop the resulting DLLs next to your executable. They'll override the system-wide OpenGL DLL with Mesa's software rasterizer.
This is how I invoke scons:

    scons -j2 build=release machine=x86 platform=windows opengl32

Adjust -j to taste/cores.
Thanks to WebGL, I'm definitely picturing an interactive tutorial on OpenGL. A simple search-and-replace algorithm could convert OpenGL statements to WebGL ones (basically just take off the "gl" prefix, lowercase the first letter, and call that function on the WebGL context object). GLSL could be fed as-is to WebGL.
I also think the tutorial should go through matrix math despite the fixed pipeline functions being gone, in which case explaining the effect of the order of operations could be done interactively. Imagine a click-and-drag interface, perhaps powered by jQuery's sortable, with a live 3D rendering on the side.
Or once you work up to rendering a triangle on the screen, imagine some textboxes or sliders beside a live 3D preview, allowing the user to move the vertices of the triangle, and perhaps another button to switch the indices order CW/CCW (or even allow them to drag the indices order as well). Same could be done with color, and once a "camera" is established, with camera position, and so on.
Basically what I'm getting at is, thanks to WebGL, I think a tutorial could be developed that is extremely interactive and browser-based. And everything would be easily ported to C, since JavaScript is so C-like (and the user-visible code could certainly be in C - or perhaps multiple languages with tabs, like MSDN does with C#/VB.NET/J#/VBScript).
this is probably completely wrong, but doesn't texture2DProj already do the W division step? Why are you dividing again in your function?
see http://www.opengl.org/wiki/Sampler_(GLSL)#Projective_texture_access
in any case, if the W divide were wrong I'd expect a different sort of error, so this is probably not it.
You may want to know that only Mac OS X 10.7 Lion and above supports OpenGL 3.2, and apparently only the 3.2 Core profile, which you have to request explicitly; the default at context creation is still the 2.1 Compatibility profile carried over from Snow Leopard. I've heard 10.6 Snow Leopard and its OpenGL 2.1 still has a small presence, which is annoying. (I'm not too happy with Apple about all of this as a Mac user, honestly.)
Personally, I would use OpenGL 3 anyway, but I'm a biased college student learning OpenGL starting with 3.2 (Internet recommendation to not get used to fixed shaders, plus using Mac 10.8).
It has been my experience, and some of the Internet's, that Intel graphics before the fairly recent ones sucked anyway, so those and maybe Mac 10.6 may not be targets worth sticking with OpenGL 2 for. Future compatibility and "latest and greatest" and all that.
OS X has had VAOs as a custom extension for a long time; you don't need 3.2. Use the following:
glGenVertexArraysAPPLE, glBindVertexArrayAPPLE, glDeleteVertexArraysAPPLE
GLEW doesn't seem to bind these so I just put this at the beginning of files where I use them:
    #ifdef __APPLE__
    #undef glGenVertexArrays
    #undef glBindVertexArray
    #undef glDeleteVertexArrays
    #define glGenVertexArrays glGenVertexArraysAPPLE
    #define glBindVertexArray glBindVertexArrayAPPLE
    #define glDeleteVertexArrays glDeleteVertexArraysAPPLE
    #endif /* __APPLE__ */
EDIT: Also, none of the places I read about VAOs really explained them well until I found the page on the OpenGL Wiki. It has a very in-depth explanation of what state is stored, etc. If you're still using fixed function you can even use glVertexPointer, glNormalPointer, and glColorPointer to bind those attributes from your VBO. In fact I do this most of the time on Macs, since modern OpenGL support is still so flaky.
Glew doesn't associate itself with contexts, it just loads OpenGL functions. Just make sure you have a context set as current.
Edit: after doing more research, it seems that function pointers can change between contexts, so glewInit() needs to be called again each time you make a different context current.
Maybe you are wondering about writing directly into video memory, i.e. the framebuffer. You can do that by writing into /dev/fb*, which corresponds to your video device. This topic is outside OpenGL, so I suggest you read more in the Linux documentation here.
> I currently basically have a vertex array and vertex buffer per shape on the screen
This is the standard way to do it, if you don't need maximum draw call performance.
The optimal way is to have one VAO in total (or one per vertex format), so you bind it once and never have to change it. When coupled with other batching techniques (often described as "AZDO"), you can draw many, many objects in a single draw call, which can be much faster for very big scenes.
> float angle = acos(dot(normalize(lightVector),normalize(normalVector))); brightnessMod += 1.0-smoothstep(0.0,PI/2.0,angle);
It looks like you're trying to emulate Lambertian Reflectance. The typical way to do this calculation (based on your current logic) would be:
brightnessMod += max(0.0, dot(normalize(lightVector), normalize(normalVector)));
This graph shows the difference between the two, where t is the normalized angle; green is your method, orange is mine. The 3t^2 - 2t^3 term is the equation for smoothstep across the interval 0 to 1.
If you're still having problems, verify your input vectors by writing them directly to your output, e.g.
gl_FragColor = vec4(normalize(lightVector) * 0.5 + vec3(0.5), 1.0);
This is a good site, but it lists which API function is covered by each OpenGL standard, which isn't enough because:

1. An OpenGL 3.3 driver may support some OpenGL 4.x functions through extensions (this was the case with Mesa for a long time, for example, until it gained OpenGL 4 support for some backends).
2. It doesn't list which hardware is compatible with each extension.
3. It doesn't document things like "this is buggy with the following driver".
Point 3 is what would be really handy. To make a comparison, when we go to Can I use: flexbox, it lists a number of known issues with Internet Explorer.
This blog post might help: https://www.mapbox.com/blog/drawing-antialiased-lines/
It doesn't give you full code, but does outline the method used to (partially) do the triangulation in the vertex shader. The method is pretty simple, relying on creating duplicate vertices with different normal vectors (plus some other tricks to handle line joins properly).
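The data setup amounts to something like this (a sketch; the struct and field names are mine, not Mapbox's):

    // Each point on the line is emitted twice, with opposite extrusion signs.
    struct LineVertex {
        float x, y;    // position of the point on the line
        float nx, ny;  // unit normal perpendicular to the segment
        float sign;    // +1 for one side of the line, -1 for the other
    };
    // In the vertex shader: position = point + sign * halfWidth * normal.
    // The fragment shader then fades alpha toward the extruded edge for the antialiasing.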
Have you read through this one also? http://www.glfw.org/docs/latest/build.html
Simplify as much as possible. Get rid of as much as you can until you get something to build and run. Do not include windows.h.
The version of GLFW you are using does come with prebuilt libraries, correct?
> There is no built-in way to load an image file to a GL Texture. The audio is simply writing PCM values to a buffer
For loading textures, use SDL2_image to load from a huge range of file formats into an SDL_Surface; getting the pixels into a GL texture is then usually very straightforward (possibly one format conversion to RGBA, then a glTexImage2D/glTexSubImage2D call).
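A rough sketch of that path (error handling omitted; "tile.png" and the filter choice are placeholders):

    SDL_Surface* img = IMG_Load("tile.png");
    SDL_Surface* rgba = SDL_ConvertSurfaceFormat(img, SDL_PIXELFORMAT_ABGR8888, 0); // RGBA byte order
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, rgba->w, rgba->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);
    SDL_FreeSurface(rgba);
    SDL_FreeSurface(img);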
For audio, you can use SDL2_mixer, which isn't quite as low-level as SDL2's built-in audio abstraction.
Glad to be of assistance. If you decide not to go with Unity due to the price tag (Unity Pro with Android Pro, 1-year student subscription, is $199), you could also take a look at Libgdx. It's currently a much lower-level 3D game framework than Unity, but I have heard a lot of people say good things about it.
I only have GL 3.3 on my system, but I have the debug extensions GL_AMDX_debug_output and GL_AMD_debug_output. In my last project I integrated those instead of using glGetError. First I tried the AMD version GL_AMD_debug_output because of having that extension. Then I renamed the functions to see if the ARB version GL_ARB_debug_output worked and it did!
What did I do:
* setting the debug flag for OpenGL via SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG); Whatever you use, you somehow need to create a debug context instead of a normal one to use this extension.
* when you have set up your window and OpenGL, and GLEW is initialized etc., just set the function that should be called via glDebugMessageCallbackARB(handleGLDebug, nullptr);
The function looks something like this:
    void handleGLDebug(GLenum source, GLenum type, GLuint id, GLenum severity,
                       GLsizei length, const GLchar* message, void* userParam)
    {
        // do some stuff
    }
After that, whenever some error or warning occurs, the function will be called.
No need for glGetError after every suspicious line of gl-code.
I guess it's not seen often because it is "new", as most tutorials are rather GL2. It might also be a general lack of error handling; even in older tutorials glGetError isn't seen that much.
Your code completely lacks any error checking. I think that would be the best place to start. Call glGetError after every GL function.
edit: changed URL to non-ES. Also note that GL doesn't fatally error; it just shrugs and pretends to be Marvin if you're not keeping an eye on it.
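A small helper macro makes that bearable (a sketch; GL_CHECK is my own name for it):

    #define GL_CHECK(call) do { \
        call; \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            fprintf(stderr, "GL error 0x%x at %s:%d\n", err, __FILE__, __LINE__); \
    } while (0)

    // usage: GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo));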
>the glm matrix shoves the translation values into the last row, not the last column
GLM matrices are column-major, which means the columns of the matrix are contiguous in memory and elements are accessed with subscript operators in the following fashion: (src)
matrix[col][row]
So the translation values are still placed in the last column. This is a very common misconception: column-major does not mean transposed. Think of it more like the difference between big and little-endian byte ordering.
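You can verify this quickly (a sketch; tx/ty/tz stand for whatever translation you pass in):

    // requires glm/gtc/matrix_transform.hpp
    glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(tx, ty, tz));
    // m[3] is the fourth *column*, contiguous in memory, and it holds the translation:
    // m[3][0] == tx, m[3][1] == ty, m[3][2] == tz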
Your video is very cool! Your implementation is really dynamic, and the visual feedback ( to the user ) is great. I am happy the whitepaper was helpful.
So, in your pseudo code above, the real problem is in the following statement:
visibilityMapTexture[currpos.x, currpos.z].r = isVisible(currpos, height, lineOfSightPoint.pos);
You can't write to a sampler in a pixel shader. As others have noted, you can use the GL_ARB_shader_image_load_store extensions to get read/write access. That would definitely work.
If you want to implement support via FBO ( as illustrated in my example ), be aware that you can do full RGBA sampling. I just sampled stuff into a float for the purposes of my example code. So you can also do stuff like:
fragColor = vec4( float_expression, float_expression, float_expression, float_expression );
and
fragColor = vec4( vec3_expression, float_expression );
You can also use layout qualifiers for more control.
layout(location = 3, index = 1) out vec4 factor;
See: http://www.opengl.org/registry/doc/GLSLangSpec.4.40.pdf ( Search for: 4.4.2 Output Layout Qualifiers )
I think this is applicable to 3.3.
> Does that sound right?
Yes. To save all those glVertexAttribPointer() calls, ARB created VAOs.
glVertexAttribPointer() reads the current GL_ARRAY_BUFFER binding and stores it elsewhere. After you call glVertexAttribPointer() you can even call glBindBuffer(..., 0), meaning bind no buffer, and it will still draw correctly. glVertexAttribPointer() sets both the data format and the source of the data; it's just for historical reasons that the last parameter is used as an offset into the currently bound GL_ARRAY_BUFFER.
http://www.opengl.org/sdk/docs/man4/html/glVertexAttribPointer.xhtml http://www.opengl.org/wiki/Vertex_Specification
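A minimal sketch of that behavior (vbo and vertexCount assumed set up elsewhere):

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);            // fine: attribute 0 still sources from vbo
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // still draws correctly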
good suggestion! I would also add some more links:
Modern OpenGL Usage: Using Vertex Buffer Objects Well, by Mark Kilgard,
and the Buffer Object page on the wiki: http://www.opengl.org/wiki/Buffer_Object
That's not true as of OpenGL 4.3.
With GL_SHADER_STORAGE_BUFFER there is more or less no limit.
As everyone has pointed out, CUDA is Nvidia-only. OpenCL is poorly supported by Nvidia; while it can be used, the performance is roughly 1/4 that of OpenGL according to my tests. That result could be a consequence of the OpenGL-OpenCL buffer sharing, but Nvidia's OpenCL also does not support 3D texture writes, which prevented me from writing a pure OpenCL renderer to confirm it.
OpenGL 4.3 also adds compute shaders, so there's really nothing OpenCL can do that OpenGL can not (excluding all that "works on things other than GPU's").
IMHO, the benefit of using CUDA or OpenCL really comes about when you are using more than one GPU.
So there's a common confusion about renderbuffers vs framebuffers. Framebuffers don't actually contain any pixel data themselves. (I can't stress this point enough.) Renderbuffers and textures contain pixel data, and you can attach these two things to framebuffers, so that when you draw into a framebuffer, the resulting pixel data ends up in the renderbuffers or textures you attached. Likewise, calling ReadPixels reads back data from the renderbuffer/texture at the attachment point in the current framebuffer indicated by the format argument.
Just bind the framebuffer that has the renderbuffer (which you want to read from) attached as the stencil attachment, and call glReadPixels with a format of GL_STENCIL_INDEX or GL_DEPTH_COMPONENT.
See the docs for ReadPixels here.
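In code, the read-back is just (a sketch; fbo, width, and height are yours):

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    std::vector<GLubyte> stencil(width * height);
    glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencil.data());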
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GL_DEPTH);

glutInitDisplayMode() doesn't accept GL_DEPTH as an argument. Try GLUT_DEPTH.
Ack, no! Just pass the pointer from glMapBufferARB() to your secondary thread to fill, then back to your primary for glUnmapBufferARB(). Double-buffering optional.
The Arcsynthesis tutorial was really helpful when I was starting out. It's targeting modern desktop OpenGL, so it refers to and makes use of a couple of features that don't exist in OpenGL ES 2.0, but I didn't find that too much of a problem.
Another thing that I felt was very useful was supplementing a practical tutorial with a more theoretical handbook. I used Essential Mathematics for Games & Interactive Applications (2nd ed., Van Verth & Bishop) and can easily recommend it.
One more thing I would like to mention to the OP: He might want to consider using a ready-made engine, such as Unity instead of straight-up OpenGL. Of course, starting from scratch is better for learning the fundamentals.
Godot Engine is FOSS, supports Linux as a first-class citizen, and can export games to Mac/Win/Linux/Android/iOS and I think web. Its development progress has been huge over the last year, and it's getting PBR rendering capability.
Not saying not to pursue OpenGL, but there are options out there for game engines on Linux.
I'm an OpenGL noob too, so this is all I can tell you. It also took me a week to draw a cube, and weeks to get a framebuffer drawing shadows. Going to sleep every night with my OpenGL program not drawing anything is not a pleasant memory. But once you get more comfortable with OpenGL, you'll find how exciting it is to implement more advanced features like lighting, shadows, normal mapping, etc. You can read my pygame/PyOpenGL code if you are interested.
>Everyone has been there
No kidding :) Check out the very first question on the OpenGL Usenet group in 1995: https://groups.google.com/forum/#!original/comp.graphics.api.opengl/bicuxkMMcpQ/2n7G0oKsHpkJ
Be careful putting glDeleteBuffers in your destructor. The destructor will be called prematurely if you use a copy constructor like this:

    std::vector<Chunk> chunks;
    chunks.push_back(Chunk());

Here is an example of this behavior.
Try commenting out your glDeleteBuffers call to see if it solves the problem. If it does, you can add an explicit Destroy() member function to your chunk type to be called exactly when you would like.
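Another option, if you're on C++11, is to make the type move-only so a temporary's destructor never frees a buffer still in use (a sketch with a hypothetical Chunk):

    struct Chunk {
        GLuint vbo = 0;
        Chunk() { glGenBuffers(1, &vbo); }
        ~Chunk() { if (vbo) glDeleteBuffers(1, &vbo); }
        Chunk(const Chunk&) = delete;                 // no accidental copies
        Chunk& operator=(const Chunk&) = delete;
        Chunk(Chunk&& o) : vbo(o.vbo) { o.vbo = 0; }  // steal ownership
        Chunk& operator=(Chunk&& o) { std::swap(vbo, o.vbo); return *this; }
    };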
The AMD optimizer has a LOT of issues regarding variables. For example, if you create a uniform block (GL 3) like uniform { float a; float b; }, it will optimize that into a vec2, making your a variable nonexistent. I also programmed a multi-light Phong shader in GLSL running on a 5830; the indices are flagged with a warning during compilation, but it works (you can see the code here: phong shader).
Very similar concepts. Especially with BVH. I doubt you'd have to switch.
This is a very informative post: https://stackoverflow.com/questions/4326332/could-anyone-tell-me-whats-the-difference-between-kd-tree-and-r-tree
You should be doing the latter.
Have a read of these discussions:
https://stackoverflow.com/questions/8923174/opengl-vao-best-practices
http://www.swiftless.com/tutorials/opengl4/4-opengl-4-vao.html
Here's some code that does instancing in one of our applications at work. Nothing secret, but this is known good code: here.
So, I have a quad that's defined by prototypeAttributes. Those are the only attributes that are truly per-vert, and they only define a kind of "unit tile"--a tile that's 1x1. You've also got your standard two-right-triangle quad indices. Additionally, I have some uniforms in the shader that determine the physical size of the tiles and the z-value of that layer. If you have tiles of different physical sizes or depths, you could stuff this into the instance information, no problem.
Then I have a vector of attributes for each instance, defined by a vbo of VITileAttributes. x/y are just the position in the grid where this tile occurs, and z is the index into a texture array containing the tile textures. (If you use texture arrays, you can do real clever tile swapping with glTexSubImage3D. It's so much better than a stitched texture atlas.)
The trick for instanced rendering is that each one of the invocations of the vertex shader within a particular instance is going to get the same value for the instanced vertex attribute. It's going to act kind of like a uniform, even though the way you set everything up is for an attribute. Between the real shader uniforms telling me layer and tile geometry, and the instance attributes telling me where this tile is in relation to the overall grid, I can reconstruct everything necessary to transform the verts and shade the fragments.
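The setup for an instanced attribute boils down to glVertexAttribDivisor (a sketch; attribute location 2, instanceVbo, and tileCount are placeholders):

    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(2);
    glVertexAttribDivisor(2, 1);  // advance this attribute once per instance, not per vertex
    glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0, tileCount);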
If you specifically target 4.5 you would exclude a very large share of users, since not only do they need a quite new card, they also need updated drivers for it. My own stats currently lump 4.5 into the "other" category, so fewer than 13% of my music visualizer's users actually have OpenGL 4.5. I would say target 3.3, as suggested by others.
g++.exe: error: unrecognized command line option '--subsystem,windows'
I did it a different way though and got it "working"
I got freeglut working using a findfreeglut.cmake file that I found in some dude's GitHub.
But now I'm getting a different error: https://hastebin.com/cuzohiwada.vbs. Looks to be a problem with my GLEW. But why?
Seconding /u/sepharoth213, which is probably the best thing to try first. However, if it's still too slow you could try out Rust, which aims to be a modern alternative to C++ while maintaining similar performance. There are SDL2, GLFW, Qt, etc. bindings for Rust, and there are also cool OpenGL wrappers like glium that you can check out.
Don't bother building stuff like that on Mac. Choose MacPorts or Homebrew (or both) and install pretty much whatever you like in seconds, including the deps.
What will really trip you up is including the .dylibs in your .app, as macOS has a funny way of finding its dynamic libraries. You and install_name_tool sitting in a tree, ...
If possible, upgrade your PC. I'm assuming that's not an option since you're posting here, but it's very hard to learn graphics without being able to actually run your code.
If you can't get a GPU which supports modern OpenGL, the next best thing would be to find a software renderer which implements OpenGL. Mesa is one such implementation. It uses a GPU if one is available, but falls back to software rendering if not. Bear in mind that software rendering is almost always slower than hardware rendering, but it should work.
The challenge will be trying to get the benefit of all the traditional website markup with WebGL which is just about rendering polygons and textures.
If I were doing this, I'd pre-make the website traditionally (with all the styling), then render portions of it into a Canvas to create textures which then get rendered onto surfaces in a 3D environment.
threejs is super helpful for doing high-level 3D things without learning low-level OpenGL ES calls.
Tracking interaction events (clicks and such) would still be tricky but doable (you'll need to convert camera space interactions into world space and associate it back to the original markup).
Depending how complex of a prototype you have in mind, you could probably have something cool done in six months. :) Could probably save a lot of time by building this with a very specific design/markup layout in mind, rather than a fully generic solution.
You use the callback provided by glfw, which is glfwSetCursorPosCallback. Documentation can be found here: http://www.glfw.org/docs/latest/group__input.html#ga7dad39486f2c7591af7fb25134a2501d
As full example I have during bootup of my program:
> glfwSetCursorPosCallback(window, CBMouseMove); // you do this once
and then a function:
> void Input::CBMouseMove(GLFWwindow* window, double posX, double posY) { // do stuff here }

(Note that GLFW takes a plain C function pointer here, so CBMouseMove must be a static member function or a free function.)
Try this link. I set up a basic GLEW/GLUT program which does nothing.
Unzip it to your msys 'home' directory and type 'make'. If it doesn't work, there may be something wrong with your compiler/overall setup. If it does work, read below.
To check what it is you forgot to do, open the 'makefile' file and compare the compile arguments with the ones generated by your IDE (assuming you know how it works... I'm not sure how pro you are). I shoved everything in the root folder just for testing purposes... don't do this.
^ I don't really know what I'm doing myself, but having a working version to compare against helps.
In any case, I would still recommend using GLFW because it gives you more control over your program, and it is still actively developed, unlike GLUT.
It looks like you've compiled your own code in debug mode but told MSVC to link against the release version of the library (sfml-system-s.lib as opposed to sfml-system-s-d.lib etc). As I understand it, this doesn't work well in the Windows world (the standard library has a different ABI between the two).
Have you seen SFML's own guide? It mentions these details, as well as another step you should do for the static ("-s") libraries: define SFML_STATIC in your project.
Did you remember to request a stencil buffer before creating your window and/or context?
The sfml documentation suggests the default doesn't have one (perhaps it's version-dependent?): http://www.sfml-dev.org/tutorials/2.0/window-opengl.php.
It's also the kind of thing VOGL is unlikely to catch, intuitively.
Edit: I see you're using SDL. There (well, SDL2), I definitely had to set the SDL_GL_STENCIL_SIZE attribute before creating the window.
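For reference, that looks like this (a sketch; a size of 8 bits is typical):

    SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);  // must come before SDL_CreateWindow
    SDL_Window* win = SDL_CreateWindow("app", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                       800, 600, SDL_WINDOW_OPENGL);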
Yeah, I call my GUI first. I needed it to stay in place, but I didn't know any other way except to draw it before my gluLookAt.
I'm calling setup() in my main: http://codepad.org/4B7z1JTI
I set my clear colour to 0,0,0 so that it was easier to look at the screen. A white background can hurt the eyes after a while.
-Alright. That is easy enough to fix the orientation then.
-Here is my image: http://imgur.com/hi3sdY4
In my draw I believe I map the image and then render the vertices: http://codepad.org/Zwh9E3qi
-I don't quite understand.
-I have modified the loop that reverses the BMP's BGR to RGB so that it goes ABGR to RGBA. My function looks like this now: http://codepad.org/6Iyh8cI2
-After fixing the loop, the error has stopped and it runs.
-It is displaying the correct colour now and stopped breaking, but it is still showing a white background: http://imgur.com/tjCv1tY
Any ideas?
You should check out libcinder
It's very well designed, very easy to work with. Has excellent geometry tools and good high level abstractions atop OpenGL, but leaves you the ability to roll up your sleeves when you need to.
It's likely that you're simply screwing up your shutdown. For instance, trying to destroy the context twice. Or, perhaps you simply have some screwed up destructors somewhere that are corrupting only tangentially-related memory. You may also be destroying resources that OpenGL expects before the driver is actually done with them. You may even have uninitialized memory.
Errors at shutdown are usually bigger programming defects that are showing through.
EDIT: What does valgrind say about your program?
The actual tessellator is fixed function and is just sandwiched between the control and evaluation stages. From http://www.opengl.org/wiki/Tessellation,
"The primitive generation part of the tessellation stage is completely blind to the actual vertex coordinates and other patch data."
The section "Abstract Patch" ( on the same page ) has more information on the process, in particular stating "the primitive generator evaluates a different number of tessellation levels and applies different tessellation algorithms".
To me this says you get what you get. Obviously you have some configuration options and "capability to suggest" in the control shader, but it doesn't look like you're going to be able to get the results shown in your image regardless of how you configure the tessellation values.
As I'm sure you're aware, you can do a lot to the data inside the evaluation shader, but I gather you didn't want to do this manually. I certainly wouldn't want to do it manually either, but I don't think you have much choice in this case.
At first glance, and I haven't had much time to consider this problem, it seems like ( at least for this case ) you'd be better off doing the tessellation in the geometry shader, although I know you're currently using it to do other things ( and of course, I would want to use the tessellation feature to do this were I writing it myself ).
You are probably aware, but it is entirely possible for the z buffer test to occur before the fragment shader has been executed, which is desirable for speed. http://www.opengl.org/wiki/Early_Depth_Test
in pseudo code:
    draw 3D scene
    setup projection and/or model matrices for drawing in 2D or in front of the camera

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  // additive blending
    float blendFactor = ...;      // interpolate blendFactor from 1 to 0
    glColor3f(blendFactor, 0, 0); // when blendFactor = 0 the quad won't be visible; when blendFactor = 1 the scene will be bathed in redness
    draw_fullscreen_quad();
All the possible arguments you can use with blendFunc are listed under glBlendFunc.
Use a timer or some time function to "animate" (interpolate) blendFactor from 0 to 1 or 1 to 0.
Edit: Forgot blending over time.
What's not to understand about us people? If you saw some guy coming in here asking for some OpenGL binding for assembler wouldn't you ask him why and if he told you he thinks that's what's modern - wouldn't you gently point out to him that newer methods exists?
This way people don't have to waste their time learning something outdated.
And to answer your initial question as to where the NeHe tutorials don't comply to the latest OpenGL specification we can look at the first NeHe tutorial with code in it HERE and then read THIS. I now hope you will think twice before recommending NeHe again - if there are issues this early in the tutorial how outdated do you think the rest of the tutorials are...?
The markup syntax used by Reddit (and many other sites). A quick Google would have given you its homepage: https://daringfireball.net/projects/markdown/syntax
And if you look toward the bottom right of the Reddit input box there's a link, "formatting help". If you click it, it tells you how to format code.
I have recently been mucking around with the same stuff: http://bitbucket.org/photex/ridetumbler
Doesn't do anything complicated though.
It runs fine under my OS X partition, which sports a sad OpenGL 2 driver, as well as my Linux partition, which is OpenGL 3.3.
I think the use for circular interpolation is motion blending in animation. Nothing to do with textures, not directly anyway.
LaTeX (or similar) is a must for making documents. Definitely a pain in the butt at first, but once you are over the learning curve it is very useful.
3D graphics is a few things. Fundamental to all of them is understanding the mathematics of linear algebra, meaning vectors and matrices, and efficient implementations of them as geometry pipelines in several separate environments: a) tools with no visualization; b) CPU-executed geometry and texture pipelines with optional partial GPU assist, also with no visualization but still engaging with GPUs and OpenGL; c) CPUs assisting GPU-based applications, which may be desktop-compiled or web/browser-based; and d) new hybrids combining everything previous with compute shaders, wasm, and kitchen sinks. Across all these environments and languages are different methods of working with vectors and matrices to produce geometry and geometry transformation pipelines (we call that animation). Fancy geometry pipelines have integrated physics and collision engines, which are also vector and matrix transformations.
I'd also recommend some additional reference sources as you teach yourself OpenGL and 3D. This book is considered the 3D graphics bible: https://www.amazon.com/Computer-Graphics-Principles-Practice-3rd/dp/0321399528/ref=pd_sbs_1/131-8595678-4937945?pd_rd_w=NL5Sd&pf_rd_p=f8e24c42-8be0-4374-84aa-bb08fd897453&pf_rd_r=HJTFQB8K7HNM1F4GNWCB&pd_rd_r=b611597e-3a74-45c9-8330-897... It covers the entire field and will have a reference for everything worth knowing, often by the original authors. Then try to locate the decades-old ACM Transactions on Graphics: https://dl.acm.org/journal/tog. Those are the original algorithms, written on decades-old hardware, which simply scream when reimplemented on today's hardware.
Always remember that OpenGL is just one implementation of doing 3D, which has countless ways of connecting its internals, but all that devolves into just working with vectors and matrices in efficient manners. If you do 3D in your career, it may not have OpenGL available, but that should not stop you.
You are most welcome!
In regards to learning C/C++: do you build games or non-game apps? Desktop/laptop, mobile, or web apps? Perhaps a mix of some or all. It'll help me focus the resources I suggest if I have a bit of background; otherwise I'd go straight to the source: C++ Primer Plus.
Stephen Prata put together a good resource that can double as a bludgeoning weapon. It's a bit dry but gives a good base; any more knowledge you'd need on a topic covered in the book can be found through an internet search.
I learned with that book in combination with the Gang of Four Design Patterns book.
Edit: added web apps as a possibility.
No, you don't, at least not for simple stuff. But for anything complex, you'll want to use C or C++.
And you should definitely learn C, it's an easy language to learn in that there isn't much to learn. Everything you need to know about it fits in a book with less than 300 pages (Kernighan & Ritchie: The C Programming Language), and it only takes a month or two to learn all the stuff if you study it for a few hours a week.
And once you know C, switching to C++ is very easy because it's basically just C with extra features and some syntax changes/additions.
Tougher topic, tbh. I mostly use C++ (<3 u C++14/17) which has a different approach to architecture, even, given the more object-oriented approach.
Architecturally, I'm somewhat content with my Vulkan rendering library - maybe it'll give you a few ideas (assuming you're working on a rendering system, that is): http://fuchstraumer.github.io/vulpes-docs/index.html
Otherwise, when it comes to architectural concepts, these can be learned most of the time by using other people's code from open-source projects and "transcribing" it in your words, or using it in your project. In this case, looking for OpenGL rendering libraries that use C on github is probably a safe bet. Books like "Clean Code" also espouse concepts that help with making a clean and functional architecture easier, too (libgen.io ftw, fyi)
Thank you for your reply! I don't want to consider it "work", but there certainly is a lot to learn and a lot to do, all of which I'm going ham on. I'll definitely check out and read that book you linked. I'm actually floating on a small backlog of programming books; skimming through "The C++ Programming Language, Special Ed." by the man himself, Mr. Stroustrup, as well as "Effective C++, Third Ed." by Scott Meyers, as some light reading in my spare time. I'm very stoked by the response I'm getting from this crowd of gentlemen and I feel as if I've learned a lot from this thread.
So I’ve not heard of that book but it was a fairly hot “cutting edge” thing around that time.
However, I think this book, "Texturing and Modeling: A Procedural Approach" (https://www.amazon.com/Texturing-Modeling-Third-Procedural-Approach/dp/1558608486), is something you should 100% check out.
The Author list alone should be a clue:
David S. Ebert, Kenton Musgrave, Darwyn Peachey, Ken Perlin, Steven Worley
It's all from that same period; we've had 20+ years to figure things out and realize GPUs aren't just for color blending, so you'll see a lot of pure-CPU approaches, etc.
But it’s an amazing look into what was the cutting edge both conceptually and in hardware at the time.
Much of it holds up and sometimes it’s a crazy side idea from a 10 year old paper that you need to inspire you.
I find it even more interesting in my work with 3D on the web. Even we lowly web 3D devs have more tech than they did sometimes; yet all our "traditional" or "previous" work was already accustomed to multiple compute shaders, multi-threaded CPU patterns, and so on, so the core fundamentals, and the less flashy but actually more optimal solutions, already existed.
I can recommend the OpenGL SuperBible 7th edition. It covers glMultiDrawElementsIndirect, compute shaders, and shader storage blocks.
Maybe that helps a bit. DSA is pretty straightforward. https://github.com/fendevel/Guide-to-Modern-OpenGL-Functions#where-is-gltextureimage . There is a lot more to modern OpenGL than just DSA. If you want to deep dive into modern rendering (and modern OpenGL) I can suggest https://www.amazon.de/Graphics-Rendering-Cookbook-comprehensive-algorithms/dp/1838986197/ref=mp_s_a_1_1?crid=NLRU5UJZT0IH&keywords=modern+rendering+cookbook&qid=1649074763&sprefix=modern+rendering+cookbook%2Caps%2C163&sr=8-1
>indeed. see: https://github.com/CaffeineMC/sodium-fabric/issues/1100, an issue i opened on this debacle
I fully understand why Sodium is dropping support for old OpenGL versions (for various reasons). However, I support the continued support of macOS by Sodium or a Sodium fork (oldium xD), especially if the new Sodium version (next) can be used through this translation layer.
This seems not to be about making OpenGL drivers for macOS, but rather about getting Linux to work on Apple Silicon. (If I wanted to use a different OS, all of which have better OpenGL support than macOS, I would do so.)
Yes, I suppose you're right - if there is only one triangle generated per point! Brain fart. But thinking about it, I can't generate a spherical shape if this is the case. Looking at an icosahedron, you need 12 vertices to make 20 faces, and with an octahedron, 8 faces from 6 vertices.
So I need to change my parameters and allow multiple triangles per point. Cheating and going the other way (making an icosphere and counting the indices and vertices), with 1 recursion I get 240 indices from 42 vertices.
With 2 recursions I get 960 indices and 162 vertices
3: 3840 indices and 642 vertices.
I think I'm unlikely to have more than 150 points in this sim but given you have pointed out how small an amount of VRAM were talking I'll just allocate space for 3840 indices and move on.
Spec or not, the fact of the matter is that Intel does not save the index buffer binding with the VAO state. This was true five years ago and the last time I personally checked was around six months ago on the latest Skylake drivers at the time.
See: https://stackoverflow.com/questions/8973690/vao-and-element-array-buffer-state
Indeed, and that should have a dependency on Mesa or some other driver and would pull those in too. See here; that's for Ubuntu, so if you're on another distro...
https://www.slideshare.net/DevCentralAMD/vertex-shader-tricks-bill-bilodeau see slides 18 and 19 specifically. It surprised me too the first time I read about it, but I've seen it multiple places, and am currently working on a benchmark for how to draw sprites with the best performance.
This is correct, look here:
https://www.archlinux.org/packages/community/x86_64/glfw-x11/
Under "Package contents" you would expect to find a .a
file if it contained a static library, but it only has .so
files.
The SuperBible, with all the cons described here. Also the red and orange books. The OpenGL cookbook (https://www.packtpub.com/game-development/opengl-development-cookbook) might be a good read too. I agree that learnopengl.com is an amazing resource, and open.gl as well.
And for an introduction to 3D graphics I like the Udacity course https://www.udacity.com/course/interactive-3d-graphics--cs291. It's WebGL with three.js, but it covers the basics without entering the painful C/C++ world.
If your source files have .cpp extension and precompiled header files are enabled, those together are probably causing the errors. See https://stackoverflow.com/questions/25828208/unknown-type-name-nserror-and-others.
GLU is implemented against the OpenGL 1.x specification. You said you'd like to batch the draw calls, which is not possible with OpenGL 1.x: it uses immediate-mode drawing, which is not optimal performance-wise. With OpenGL 3.x and later you can batch the geometry into a graphics buffer and tell OpenGL to draw the contents of that buffer when you have enough stuff in it.
Here's a good explanation: https://stackoverflow.com/questions/6733934/what-does-immediate-mode-mean-in-opengl
>How do we test exported files of OBJ (or other file formats) from software that we don't have or reasonably can't afford such as 3DSMax or Maya?
As for making a single index array, it's not that complicated. To do it efficiently for huge models you're going to need to involve a data structure for fast key lookups, like a binary search tree, but other than that it's quite simple:

- Load the obj attribute arrays: obj_v, obj_vt, obj_vn.
- Create a key-value lookup data structure with the key being a face-vertex vidx/vtidx/vnidx triple, and the value being its index in your final single vertex buffer. It starts empty.
- For each face-vertex, look up the corresponding index. If it exists, append it to the single index array. If it doesn't, insert it in the data structure, append the new index to the single index array, and append all the corresponding obj_v[vidx], obj_vt[vtidx], obj_vn[vnidx] attributes referenced by this face-vertex to the "unified" GL vertex arrays.
That's it. Don't forget to take negative indices into account in the process, because it's too bizarre a feature not to implement :)
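A compressed sketch of that loop in C++ (types and names are mine; std::map stands in for the fast-lookup structure):

    #include <array>
    #include <map>
    #include <vector>

    struct FaceVertex { int v, vt, vn; };  // indices into obj_v / obj_vt / obj_vn

    std::map<std::array<int, 3>, unsigned> seen;  // face-vertex triple -> final index
    std::vector<unsigned> indices;                // the single index array
    for (const FaceVertex& fv : faceVertices) {
        std::array<int, 3> key = { fv.v, fv.vt, fv.vn };
        auto it = seen.find(key);
        if (it == seen.end()) {
            unsigned newIndex = (unsigned)seen.size();
            seen[key] = newIndex;
            // also append obj_v[fv.v], obj_vt[fv.vt], obj_vn[fv.vn] to the unified vertex arrays here
            indices.push_back(newIndex);
        } else {
            indices.push_back(it->second);
        }
    }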
Why are you using XMing? Unless you're running it over SSH, you're better off compiling it with a compiler that targets windows.
I recommend installing MSYS2, it uses a windows port of Arch Linux's pacman as its package manager and it contains up to date packages and compilers.
Check out this page to get started. Looking at it, it's a bit out of date, since you can now just do pacman -Syu without worrying.
I really liked OpenGL 4 Shading Language cookbook.
It speaks about a bit of everything and the examples are neat.
There is an AUR package for glfw3 which provides libglfw.so, so if you install that you should be able to link against it with "-lglfw". There appear to be separate packages for X11 and Wayland, and the link above is for X11, so if you are using Wayland you need the glfw-wayland package instead.
Thanks. So it's not another object but a mode of access.
I think I was confusing it with rectangle textures (just found this SO question), but their normal accesses do filtering. And they aren't another object anyway, just another type of texture.
appgamekit uses GLSL for its shaders: https://www.appgamekit.com/documentation/guides/13_shaders.htm
The shader you linked contains both the vertex and pixel shader in a single file with the .glsl extension. The simplest way to include it is to create these two files:
crt-pi.vs:

    #define VERTEX 1
    #include "crt-pi.glsl"

crt-pi.ps:

    #define FRAGMENT 1
    #include "crt-pi.glsl"
Hey there. Because I'm an idiot, I read my values incorrectly. My Vertex struct reads in all my data correctly in my Init.
I have a question about what the next step is though. After it reads in all the data how would I get the model to animate?
I have another question about how I draw my mesh to the screen using your method. I am trying to plug in my values, but the mesh appears all wrong. To test it I'm drawing a cube. This is what happens though: http://i.imgur.com/edaLQ1o.png
This is how I am currently drawing it: http://ideone.com/S9rjzk
Any idea why it isn't drawing properly?
I understand that, but unless you are an expert in it you aren't going to understand it. My best example is trying to use Wikipedia to understand math formulas: there is so much jargon it is unreadable.
That makes sense. It would probably be better to have one variable to store everything instead of multiple. I have added that struct to my class, but I am having trouble getting it to read my data correctly into the struct array. It seems to reread parts of the file.
This is my updated code. What am I missing?
Also, I have converted your pseudocode to code. Did I implement it correctly?
I'm afraid I am still lost.
With your vertex format, is that all within my class or is that all contained within a struct? I really don't understand how you set up your variables.
I have the inverse of the bone offset matrix working, but I don't quite know how I call the double for loop.
Please don't call it "simple" or "obvious." Just because you understand doesn't mean that I do. I'm sorry if I sound rude; I really don't mean to. It's just that I have been trying to get this to work for months and am overly frustrated with this library. I have been rereading your answer, but I still don't understand how to get this to work. There is a lot of technical jargon in it and I feel I keep misreading it because of that.
This is about as far as I understand, and even then I still think it is wrong (the lines I have added are commented out and have been tabbed). Is what I have so far correct? http://ideone.com/FEibCO
Can't help you with Visual Studio I'm afraid, but if I may offer an alternative: fuck Visual Studio and use nice free software tools like GCC and a nice UNIX shell. You can install all this extremely easily with the MSYS2 installer. Further bonus: it comes with a package manager that you can use to install text editors like vim, build tools like GNU make and CMake, and any library you care to use like freeglut and SDL2, with a simple command and no further fuss. Here's the link: http://www.msys2.org/
http://threejs.org/docs/index.html#Manual/Introduction/Creating_a_scene : the devs themselves call it a library. I guess I was thinking of a game engine versus an engine in general. I hadn't heard the terms "rendering engine" or "graphics engine", so I wouldn't know how to relate them. But it doesn't qualify as a game engine, as it has no components for making a game of sorts. There's definitely third-party stuff for it, though.
Unless you'd like to put a lot of unnecessary effort into re-inventing the wheel, you should definitely use some UI toolkit. Given the rather advanced requirements (a cross-platform media file browser needs a lot of UI design work, plus you'd want a UI library that integrates well with libraries for showing/playing media), it would be reasonable to try out five or more toolkits before committing to one. If you'd like to pull off some snazzy effects, it could make sense to pick one that's built on OpenGL and makes it reasonably easy to work with OpenGL directly in those few isolated spots where that would actually be required. (E.g. Kivy could qualify there; I'm reluctant to give a recommendation because I haven't tried it yet.)
Even with the perfect set of libraries, you'd be looking at a huge amount of work. Why not just hack on something like Kodi to make it do what you want? Those folks have already done so much great work; starting from scratch, you could put 10,000 hours into your own project and still be nowhere close to catching up.
Well, for a concrete example, you could take a look at how a window is created using both libraries. In SFML it's pretty straightforward: you create a sf::Window object, you can wrap it, and it'll be destroyed when it goes out of scope. In GLFW, on the other hand, you have to call glfwCreateWindow, which returns a pointer, and then call glfwDestroyWindow when you are done with it. When binding stuff with luabind I have to make sure that each object can be held inside a shared_ptr in order to be properly GC'd, so I would need to write a wrapper for GLFW, while SFML works out of the box. By more object-oriented I mean that I prefer having full object ownership as opposed to holding a reference to it. Also, as mentioned above, having a sf::Event object instead of callbacks.
> using glew and freeglut
If you're looking to make something outside your class in OpenGL from scratch using c/c++, try glfw for setting up the window/opengl environment. Their example code shows how simple it is.
I went here and clicked the download link. It's GLFW 3.1.1 for Windows 64-bit (at least for me it was). That gives you a whole bunch of different lib folders and an include folder (as well as some others). I simply copied the files from the include folder, and copied the files in the folder titled "lib-vc2013". My computer is Windows 8.1 64-bit. I'm running Visual Studio Community 2013.
Looks like the tutorial you're working with is using GLFW 2.x. When version 3 came out they changed the API a ton, which is what's causing your compilation problems. For the most part, the function names are all the same, but they take a GLFWwindow argument instead of just working with the current window.
You can find the reference documentation here. Just look for any of the functions that are throwing errors and see what the new version of the functions look like.
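For example, a minimal GLFW 3 main loop looks roughly like this (a sketch; compare each call against the reference docs):

    GLFWwindow* window = glfwCreateWindow(640, 480, "Title", NULL, NULL);
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window)) {
        // draw ...
        glfwSwapBuffers(window);   // in GLFW 2.x this was just glfwSwapBuffers()
        glfwPollEvents();
    }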