This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: [OMPI devel] [PATCH v5 0/4] vm: add a syscall to map a process memory into a pipe


All MPI implementations have support for using CMA to transfer data between local processes. The performance is fairly good (not as good as XPMEM), but the interface limits what we can do with remote process memory (no atomics). I have not heard about this new proposal. What is the benefit of the proposed calls over the existing calls?

-Nathan

> On Feb 26, 2018, at 2:02 AM, Pavel Emelyanov <xemul@virtuozzo.com> wrote:
> 
> On 02/21/2018 03:44 AM, Andrew Morton wrote:
>> On Tue,  9 Jan 2018 08:30:49 +0200 Mike Rapoport <rppt@linux.vnet.ibm.com> wrote:
>> 
>>> This patch introduces a new process_vmsplice system call that combines
>>> the functionality of process_vm_readv and vmsplice.
>> 
>> All seems fairly straightforward.  The big question is: do we know that
>> people will actually use this, and get sufficient value from it to
>> justify its addition?
> 
> Yes, that's what bothers us a lot too :) I tried to start by finding out whether anyone
> actually uses the process_vm_readv()/process_vm_writev() calls, but failed :( Does anybody
> know how popular these syscalls are? If their users operate on large amounts of memory,
> they could benefit from the proposed splice extension.
> 
> -- Pavel


