Adding a large number of IPv4 entries for a host in /etc/hosts and then looking that host up with getaddrinfo results in a segmentation fault.
Steps to Reproduce:
1. Create 50K '127.0.0.1 host-fubar' entries and 50K '::1 host-fubar' entries in '/etc/hosts'.
2. Call getaddrinfo with 'node' = "host-fubar", no 'flags' set, and AF_INET in 'hints->ai_family'.
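The steps above can be sketched as a small C reproducer (illustrative only: the 'hosts.test' path and helper names are my own, and the generated file is written to a scratch location rather than installed over a real /etc/hosts, which would be needed to actually trigger the crash on an unfixed glibc):

```c
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Step 1: generate 50K IPv4 and 50K IPv6 entries for the same host
   in hosts(5) format.  'path' is a scratch file here, not /etc/hosts.  */
static void
write_hosts_file (const char *path, long count)
{
  FILE *fp = fopen (path, "w");
  if (fp == NULL)
    {
      perror ("fopen");
      exit (EXIT_FAILURE);
    }
  for (long i = 0; i < count; i++)
    fprintf (fp, "127.0.0.1 host-fubar\n");
  for (long i = 0; i < count; i++)
    fprintf (fp, "::1 host-fubar\n");
  fclose (fp);
}

/* Step 2: the lookup pattern from the report -- AF_INET only, no flags.
   On an unfixed glibc, with the entries above in the real /etc/hosts,
   this call crashed.  */
static int
lookup_host_fubar (void)
{
  struct addrinfo hints;
  struct addrinfo *res = NULL;

  memset (&hints, 0, sizeof hints);
  hints.ai_family = AF_INET;

  int err = getaddrinfo ("host-fubar", NULL, &hints, &res);
  if (err == 0)
    freeaddrinfo (res);
  return err;
}
```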
Patch coming up.
Fixed in master:
Author: Siddhesh Poyarekar <firstname.lastname@example.org>
Date: Wed Oct 30 16:13:37 2013 +0530
Fix reads for sizes larger than INT_MAX in AF_INET lookup
Currently for AF_INET lookups from the hosts file, buffer sizes larger
than INT_MAX silently overflow and may result in access beyond bounds
of a buffer. This happens when the number of results in an AF_INET
lookup in /etc/hosts is very large.
There are two aspects to the problem. One problem is that the size
computed from the buffer size is stored into an int, which results in
overflow for large sizes. Additionally, even if this size was
expanded, the function used to read content into the buffer (fgets)
accepts only int sizes. As a result, the fix is to have a function
wrap around fgets that calls it multiple times with int sizes if the
requested size is larger than INT_MAX.
ChangeLog | 8 ++++++++
NEWS | 2 +-
nss/nss_files/files-XXX.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++--------
3 files changed, 60 insertions(+), 9 deletions(-)
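The wrapper the commit message describes can be sketched roughly as follows (illustrative names, not the actual code in nss/nss_files/files-XXX.c): the length stays in a size_t throughout so it never overflows an int, and each individual fgets call is clamped to at most INT_MAX, since fgets itself only takes an int count:

```c
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the fix: read a line into a buffer of
   size_t length by calling fgets repeatedly with int-sized chunks
   when the buffer is larger than INT_MAX.  */
static char *
fgets_wrapped (char *buf, size_t len, FILE *fp)
{
  char *p = buf;

  while (len > 0)
    {
      /* fgets accepts only an int, so clamp each read to INT_MAX.  */
      int chunk = len > INT_MAX ? INT_MAX : (int) len;

      if (fgets (p, chunk, fp) == NULL)
        /* EOF or error: return what was read so far, if anything.  */
        return p == buf ? NULL : buf;

      size_t got = strlen (p);
      /* Done once a full line has been read (or no progress is
         possible, e.g. only room left for the terminating NUL).  */
      if (got == 0 || p[got - 1] == '\n')
        return buf;

      /* Continue reading the same line into the rest of the buffer.  */
      p += got;
      len -= got;
    }

  return buf;
}
```

For buffers that fit in an int this behaves like a single fgets call; the chunked loop only matters for the very large reads that previously overflowed.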
Siddhesh, is this triggerable without editing /etc/hosts? In other words, does this cross a security boundary?
This specific problem is only triggered by editing /etc/hosts. Given that the root cause was related to reading in the file, I'd say this does not have an impact outside of /etc/hosts, so it should not be a security issue.