Always happy to help (and share knowledge and resources)!
Yeah, going from 25 FPS to 30 FPS is a signal flow nightmare, especially when you consider interlacing.
To answer your question about upscaling from 1080p to 4K: Mathematically, it doubles both the width and the height of the image, which means four times the pixels. 1080p has a resolution of 1920x1080. 4K has a resolution of 3840x2160 - twice the width, twice the height. My two cents is it isn't worth it, but I've learned through this sub that it supposedly works wonders for YouTube compression. To be fair, I mostly work in TV and film, not web content, so I'm a little out of the loop on best web content compression practices.
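If it helps to see the pixel math, here's a quick toy sketch in Python (nearest-neighbor only, purely illustrative - real upscalers and YouTube's re-encode do much smarter interpolation):

```python
# Toy nearest-neighbor 2x upscale: every source pixel becomes a 2x2 block.
# Purely to illustrate the pixel math - real upscalers interpolate.

def upscale_2x(frame):
    """frame: list of rows, each row a list of pixel values."""
    out = []
    for row in frame:
        doubled = [px for px in row for _ in range(2)]  # each pixel twice, horizontally
        out.append(doubled)
        out.append(list(doubled))                       # each row twice, vertically
    return out

tiny = [[1, 2],
        [3, 4]]
print(upscale_2x(tiny))  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]

# Same ratio at real resolutions:
print(1920 * 2, 1080 * 2)              # 3840 2160
print((3840 * 2160) // (1920 * 1080))  # 4 -> four times the pixels
```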
To expand a lot (but not too much) on my previous answer:
Way back when television was first being introduced, a TV couldn't draw a whole frame at once due to the bandwidth required. Instead, it would draw half the horizontal lines, then the other half. If it were HD, that would be like drawing a 1920x540 image twice per frame. Each "pass" in this method of "scanning" a picture is called a field. This is called interlacing, commonly abbreviated as "i" after a frame rate or resolution (59.94i, 1080i, etc.). SD TV was typically broadcast in one of two standards: NTSC (29.97i) or PAL (25i).
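If it's easier to picture in code, here's a little Python sketch of splitting a frame into its two fields (hypothetical scanline lists, not a real video pipeline):

```python
# Splitting a 1080-line frame into two interlaced fields of 540 lines each.
# (Which field comes first - top or bottom - actually varies by format.)

frame = [f"scanline {i}" for i in range(1080)]  # stand-in for a 1920x1080 frame

top_field = frame[0::2]     # lines 0, 2, 4, ... -> 540 lines
bottom_field = frame[1::2]  # lines 1, 3, 5, ... -> 540 lines

print(len(top_field), len(bottom_field))  # 540 540 - each "pass" is effectively 1920x540
```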
Which one you used depended on where you were in the world - most of North and South America used NTSC, while most of Europe, Asia, and Australia used PAL or SECAM (which are, for most intents and purposes, the same thing). These are now being phased out as HD and 4K standards are adopted.
Computers nowadays can do "Progressive" video, where the whole image is drawn in one scan. This has become the standard for HD*, 4K, and beyond, because there is less artifacting and weird line blur (see my original comment for an example of an interlaced frame). This is commonly abbreviated as "p" after a frame rate or resolution (1080p, 29.97p, etc.)
Converting from Interlaced to Progressive ("De-Interlacing") can introduce artifacting or weird line blurring, and going from Progressive to Interlaced can, too. The Studio version of Resolve can handle this well, but it's locked behind the paid version because most people using the free version don't care and won't need it. Resolve can still introduce interlacing artifacts at inopportune times, because normally that workflow is tightly controlled.
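For the curious, the crudest possible de-interlace is plain line doubling ("bob"); a toy Python version is below. Real de-interlacers do smarter things like interpolating between lines and blending both fields with motion detection, which is exactly where the weird blur comes from when they guess wrong:

```python
# Crude "bob" de-interlace: rebuild a full-height frame from one field by
# repeating every line. Real de-interlacers interpolate between lines and/or
# blend both fields (often with motion detection) instead of just doubling.

def bob_deinterlace(field):
    """field: list of 540 scanlines -> 1080-line progressive frame."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # naive: repeat the line instead of interpolating
    return frame

field = [f"scanline {i}" for i in range(540)]
print(len(bob_deinterlace(field)))  # 1080
```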
To go a little bit into media distribution standards: 99.99% of movies you see in theaters are 24fps. Most TV shows** shoot at 23.976fps because, mathematically, it lines up with 29.97fps with little to no speed change when conversions have to be made for broadcast (rough sketch of the math below the footnotes). The web doesn't care what you give it***, but recording at your region's "native" frame rate (24 or 25) is probably best. If you're mixing with gameplay footage, 25, 30, 50, or 60 may be your best bet.
*HD video can still be interlaced. Interlacing at 4K and up is much less common and isn't supported in most official formats.
**In the US/North America, at least. I don't know much about the rest of the world.
***As long as it's the right codec, which is a whole other ball of wax, but H.264 should be good.
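And to put actual numbers on the 23.976/29.97 thing from above - it's just the NTSC 1000/1001 offset plus 2:3 pulldown, which works out to an exact 5:4 ratio, so no speed change is needed. Quick back-of-the-envelope in Python:

```python
from fractions import Fraction

# NTSC rates are the "whole" rates slowed down by a factor of 1000/1001.
film_ntsc = Fraction(24000, 1001)    # ~23.976 fps
video_ntsc = Fraction(30000, 1001)   # ~29.97 fps

# 2:3 pulldown spreads 4 film frames across 10 fields (5 interlaced frames),
# so the two rates sit at an exact 5:4 ratio - no speed change required.
print(video_ntsc / film_ntsc)               # 5/4
print(float(film_ntsc), float(video_ntsc))  # 23.976023976023978 29.97002997002997

# PAL territories often just speed 24fps film up to 25 instead (~4% faster).
print(float(Fraction(25, 24)))              # 1.0416666666666667
```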
very tl;dr, much attempt at humor: 1080p to 4K upscaling for Web Content? Fairly common. 1080p to 4K upscaling for TV/Movies? Very uncommon. For everything else, ~~there's MasterC~~ blame broadcast.
For further reading (if this interests you): *How Video Works: From Broadcast to the Cloud*