
shm::RawPool is not safe if used with screencopy #320

Open
ids1024 opened this issue Nov 17, 2022 · 8 comments

@ids1024
Member
ids1024 commented Nov 17, 2022

shm::RawPool allows reading and writing memory in the pool, or accessing it as a slice, while that memory is part of a wl_buffer::WlBuffer.

This doesn't produce any soundness issues (on the client side) when the buffer is attached to a surface, but with screencopy the server writes to the buffer. The generated bindings for a protocol like that don't mark any methods as unsafe, so this allows UB from safe code.

Potentially this could also cause soundness issues on the compositor side (with screencopy or with buffers attached to a surface), which is more difficult to deal with, since a compositor has to handle broken or malicious clients.

The API should probably either mark certain operations unsafe or make it impossible to construct a slice from memory that is part of a wl_buffer.
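For concreteness, here's a self-contained sketch of the hazard (Linux only, using the `libc` crate; the setup is a stand-in, not the actual RawPool API). A second mapping of the same memfd plays the role of the compositor writing during a screencopy:

```rust
use std::slice;

fn main() {
    unsafe {
        // An anonymous shared-memory file, like the one backing a wl_shm pool.
        let fd = libc::memfd_create(b"pool\0".as_ptr().cast(), 0);
        assert!(fd >= 0);
        assert_eq!(libc::ftruncate(fd, 4096), 0);

        // "Client" mapping: the memory a pool would expose as a slice.
        let client = libc::mmap(
            std::ptr::null_mut(),
            4096,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_SHARED,
            fd,
            0,
        );
        // "Server" mapping of the same pages, standing in for the compositor
        // writing into the buffer during a screencopy.
        let server = libc::mmap(
            std::ptr::null_mut(),
            4096,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_SHARED,
            fd,
            0,
        );
        assert!(client != libc::MAP_FAILED && server != libc::MAP_FAILED);

        // A safe-looking exclusive slice over the client mapping...
        let slice: &mut [u8] = slice::from_raw_parts_mut(client.cast::<u8>(), 4096);
        slice[0] = 1;

        // ...while the "server" mutates the same bytes behind its back. This
        // breaks the exclusivity that &mut promises the compiler (the demo
        // deliberately has UB; that is exactly the hazard).
        *server.cast::<u8>() = 2;
        println!("slice[0] = {}", slice[0]); // compiler may assume this is 1
    }
}
```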

@i509VCB
Member
i509VCB commented Nov 24, 2022

I guess we could make RawPool's slice methods unsafe. That would mean the io::Read and io::Write implementations would need to go, of course.

I'd probably avoid trying to insert slot-based access checking in RawPool; we have SlotPool for that.
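As a sketch of what that could look like (a minimal stand-in type, not the real smithay-client-toolkit signature), the obligation would move into the method's safety contract:

```rust
/// Minimal stand-in for illustration; not the real RawPool.
pub struct RawPool {
    ptr: *mut u8,
    len: usize,
}

impl RawPool {
    /// # Safety
    ///
    /// The caller must guarantee that no other process writes to the pool's
    /// memory for the lifetime of the returned slice; in particular, that no
    /// wl_buffer backed by this pool is in use by a protocol (such as
    /// screencopy) that lets the server write to it.
    pub unsafe fn mmap(&mut self) -> &mut [u8] {
        std::slice::from_raw_parts_mut(self.ptr, self.len)
    }
}
```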

@danieldg
Contributor

Note that if we wanted to be completely strict with safety, we could not allow access via safe &mut [u8] to any shm buffer, including SlotPool and the others, because the compositor could in theory write to any of them at any time. You might argue that it's fine if the compositor is trusted, but as this issue brought up, it's possible for a new protocol to reuse an object (wl_buffer) and decide to define correct behavior to include writes to the memory.

The same is true on the compositor side; while the protocol does forbid clients from modifying a buffer that is attached (prior to the release event), the compositor has even less reason to trust clients, and so likely should not expose the shm area as a &[u8] to rust.

Exposing all shared memory as &[AtomicU8] would be a safe but fairly useless (and badly-performing) API, unless rustc is just as capable of promoting a loop of Relaxed loads/stores to a memcpy like it does with &mut [u8].
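For what it's worth, a sketch of what that API could look like (the helper names are made up, nothing from the toolkit):

```rust
use std::slice;
use std::sync::atomic::{AtomicU8, Ordering};

/// View a raw shared-memory mapping as a slice of atomics. AtomicU8 has the
/// same size and alignment as u8, so any byte mapping qualifies.
///
/// # Safety
/// `ptr` must be valid for reads of `len` bytes for the lifetime `'a`.
unsafe fn shm_as_atomics<'a>(ptr: *mut u8, len: usize) -> &'a [AtomicU8] {
    slice::from_raw_parts(ptr.cast::<AtomicU8>(), len)
}

/// Copy out of shared memory one Relaxed load at a time. Concurrent server
/// writes are no longer a data race in the Rust sense, but this is the loop
/// rustc would have to turn into a memcpy for it to perform acceptably.
fn copy_out(shm: &[AtomicU8], dst: &mut [u8]) {
    for (d, s) in dst.iter_mut().zip(shm) {
        *d = s.load(Ordering::Relaxed);
    }
}
```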

@ids1024
Member Author
ids1024 commented Feb 25, 2024

Yep.

On the compositor side, I tried to improve this a bit with Smithay/smithay#851. Though further auditing of the code dealing with shm mappings is welcome. The compositor should attempt to be secure against malicious clients.

On the client side, it is mostly reasonable, if not ideal, for the client to trust the compositor it is connected to. Not being able to use normal slices isn't really a viable option for providing an ergonomic and performant API for something like Softbuffer. &[AtomicU8] is kind of interesting. I've never really thought about using atomics like that for inter-process communication, but I guess it's #[repr(C)] and works with multiple threads, so that should be valid. But yeah, not really a good option here.

What we might be able to do, though, is get better security with platform specific mechanisms. F_SEAL_FUTURE_WRITE may be useful here, as documented in fcntl(2):

> The effect of this seal is similar to F_SEAL_WRITE, but the contents of the file can still be modified via shared writable mappings that were created prior to the seal being set. Any attempt to create a new writable mapping on the file via mmap(2) will fail with EPERM. Likewise, an attempt to write to the file via write(2) will fail with EPERM.
>
> Using this seal, one process can create a memory buffer that it can continue to modify while sharing that buffer on a "read-only" basis with other processes.

That sounds viable here, as long as the client only needs a single mapping and not a writable fd, and as long as the buffer never needs to be written by the server (so not for screencopy).
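A sketch of that flow with the `libc` crate (the pool-creation details here are illustrative, not existing toolkit code):

```rust
use std::io;

/// Create a pool, map it writably once, then seal future writes before the
/// fd is sent to the compositor. The pre-existing mapping stays writable;
/// the server cannot create a writable mapping or write(2) to the fd.
fn sealed_pool(len: usize) -> io::Result<(libc::c_int, *mut u8)> {
    unsafe {
        let fd = libc::memfd_create(
            b"shm-pool\0".as_ptr().cast(),
            libc::MFD_CLOEXEC | libc::MFD_ALLOW_SEALING,
        );
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        if libc::ftruncate(fd, len as libc::off_t) != 0 {
            return Err(io::Error::last_os_error());
        }
        // Our one writable mapping, created before the seal takes effect.
        let ptr = libc::mmap(
            std::ptr::null_mut(),
            len,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_SHARED,
            fd,
            0,
        );
        if ptr == libc::MAP_FAILED {
            return Err(io::Error::last_os_error());
        }
        if libc::fcntl(fd, libc::F_ADD_SEALS, libc::F_SEAL_FUTURE_WRITE) != 0 {
            return Err(io::Error::last_os_error());
        }
        Ok((fd, ptr.cast::<u8>()))
    }
}
```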

FreeBSD doesn't seem to have that option, but there is https://man.freebsd.org/cgi/man.cgi?cap_rights_limit. From the man page, it's unclear to me if that only works for a process in "capability mode" or if it's a separate feature often used together with that.

@danieldg
Contributor

Note that using F_SEAL_FUTURE_WRITE will very likely break pool resizing, which is a very useful feature for most of the shm APIs.

@ids1024
Member Author
ids1024 commented Feb 25, 2024

Yeah, true. I forgot about that part. I think that's why I haven't tried to use it.

I'm not aware of any Linux API that would allow the client to resize the shm buffer while not allowing the server to write to it. Though it seems like something that should exist.

@i509VCB
Member
i509VCB commented Feb 25, 2024

> Note that using F_SEAL_FUTURE_WRITE will very likely break pool resizing, which is a very useful feature for most of the shm APIs.

There is a proposal upstream which would actually remove resizing in a future version of the protocol: https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/161

@ids1024
Member Author
ids1024 commented Feb 25, 2024

I guess (at least on Linux) a client could just allocate a memfd much larger than it needs, and never resize the pool. I'm not sure there's any harm in doing that. I believe it won't actually allocate that much physical memory; it will just map in a zeroed page for reads, and then trap on writes and map in new pages as needed.
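A sketch of that approach (the 256 MiB cap is an arbitrary example, not a recommendation):

```rust
use std::io;

/// Size the memfd for the largest pool the client might ever need, so the
/// pool never has to be resized. Untouched pages are backed by zero pages
/// and cost no physical memory until first write.
fn oversized_pool() -> io::Result<libc::c_int> {
    const MAX_POOL_SIZE: libc::off_t = 256 * 1024 * 1024; // hypothetical cap
    unsafe {
        let fd = libc::memfd_create(b"big-pool\0".as_ptr().cast(), libc::MFD_CLOEXEC);
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        // Extends the file, not resident memory; pages are allocated lazily.
        if libc::ftruncate(fd, MAX_POOL_SIZE) != 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(fd)
    }
}
```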

@danieldg
Contributor

I expect that if that protocol change were merged, we would need to have the pool re-submit the memfd and create a new shm object to implement resize. That would only cause problems if applications held on to the pool object, which is not likely.
