
apt update signature fail + info in thread (IMPORTANT)

Hello there,

I’ve been doing reverse engineering and digital forensics for quite a while… started with Stuxnet back in 2009.

I am flagged, or someone got me really good, and I am working on current exploits affecting BIOS (maybe hardware rather than BIOS), Unix-based systems (Linux and macOS; some systems just take longer for the migration to complete), iOS and Android. In my book that’s enough to raise eyebrows, and if this is not completely cross-platform I am definitely flagged. This is why I try to reduce identifiable keywords to a minimum using the most basic kind of plain-text cipher. I need help now, after 3 infected iOS devices, 1 macOS machine, 4 laptops and 2 desktops. Help needed at j.s.m.i.o.u.s.s.e@agi.sky.net (remove all dots in the first part of the mail and the dot between agi and sky).

Now for the update signature fail bug. Sorry, I cannot upload files: I can’t use the command line, and permission is denied when using the GUI…

<pre>
amnesia@amnesia:~$ sudo apt update && sudo apt upgrade

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for amnesia: 
Get:1 tor+http://sdscoq7snqtznauu.onion/torproject.org stretch InRelease [4,965 B]
Get:2 tor+http://sdscoq7snqtznauu.onion/torproject.org stretch/main amd64 Packages [3,496 B]
Get:3 tor+http://jenw7xbd6tf7vfhp.onion 3.11 InRelease [6,858 B]               
Err:3 tor+http://jenw7xbd6tf7vfhp.onion 3.11 InRelease                         
  The following signatures were invalid: EXPKEYSIG C7988EA7A358D82E deb.tails.boum.org archive signing key
Get:4 tor+http://sgvtcaew4bxjd7ln.onion stretch/updates InRelease [94.3 kB]    
Get:5 tor+http://sgvtcaew4bxjd7ln.onion stretch/updates/main amd64 Packages [487 kB]
Ign:6 tor+http://vwakviie2ienjx6t.onion/debian stretch InRelease               
Get:7 tor+http://vwakviie2ienjx6t.onion/debian sid InRelease [247 kB]          
Get:8 tor+http://sgvtcaew4bxjd7ln.onion stretch/updates/main Translation-en [216 kB]
Get:9 tor+http://sgvtcaew4bxjd7ln.onion stretch/updates/contrib amd64 Packages [1,760 B]
Get:10 tor+http://sgvtcaew4bxjd7ln.onion stretch/updates/contrib Translation-en [1,759 B]
Get:11 tor+http://vwakviie2ienjx6t.onion/debian stretch-backports InRelease [91.8 kB]
Get:12 tor+http://vwakviie2ienjx6t.onion/debian stretch Release [118 kB]       
Get:13 tor+http://vwakviie2ienjx6t.onion/debian sid/main amd64 Packages [8,283 kB]
Get:14 tor+http://vwakviie2ienjx6t.onion/debian sid/main Translation-en [6,320 kB]
Get:15 tor+http://vwakviie2ienjx6t.onion/debian sid/contrib amd64 Packages [60.1 kB]
Get:16 tor+http://vwakviie2ienjx6t.onion/debian sid/contrib Translation-en [50.4 kB]
Get:17 tor+http://vwakviie2ienjx6t.onion/debian stretch-backports/main amd64 Packages [593 kB]
Get:18 tor+http://vwakviie2ienjx6t.onion/debian stretch-backports/main Translation-en [454 kB]
Get:19 tor+http://vwakviie2ienjx6t.onion/debian stretch-backports/contrib amd64 Packages [11.1 kB]
Get:20 tor+http://vwakviie2ienjx6t.onion/debian stretch-backports/contrib Translation-en [7,540 B]
Get:21 tor+http://vwakviie2ienjx6t.onion/debian stretch Release.gpg [2,434 B]  
Get:22 tor+http://vwakviie2ienjx6t.onion/debian stretch/main amd64 Packages [7,082 kB]
Get:23 tor+http://vwakviie2ienjx6t.onion/debian stretch/main Translation-en [5,384 kB]
Get:24 tor+http://vwakviie2ienjx6t.onion/debian stretch/contrib amd64 Packages [50.9 kB]
Get:25 tor+http://vwakviie2ienjx6t.onion/debian stretch/contrib Translation-en [45.9 kB]
Reading package lists... Done                                                  
W: GPG error: tor+http://jenw7xbd6tf7vfhp.onion 3.11 InRelease: The following signatures were invalid: EXPKEYSIG C7988EA7A358D82E deb.tails.boum.org archive signing key
E: The repository 'tor+http://jenw7xbd6tf7vfhp.onion 3.11 InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
amnesia@amnesia:~$
</pre>
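
For triage: @EXPKEYSIG@ means the signature on the @InRelease@ file was made with a key that has since expired, not that the download was tampered with. Here is a minimal sketch (my own quick check, assuming a Debian-based system where @apt-key@ exists; not official Tails advice) to list which trusted archive keys have expired:

<pre><code class="python">
import subprocess

# apt refuses the repository because its signing key expired (EXPKEYSIG),
# so continuing the upgrade would effectively be unauthenticated.
keys = subprocess.check_output(["apt-key", "list"],
                               universal_newlines=True)

# gpg-style listings mark stale keys with "[expired: YYYY-MM-DD]".
for block in keys.split("\n\n"):
    if "expired" in block:
        print(block)
</code></pre>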

Build information:
3.11 - 20181210 (though I made this key not so long ago… anyway)
aa5bd4c38cc82abd590ba4a956ab441937f6b306
live-build: 3.0.5+really+is+2.0.12-0.tails5
live-boot: 1:20170112
live-config: 5.20170112+deb9u1

I found this nice Python script, plus a more elaborate remote upgrader for people like me; neither looks like Tails programming. The first appears to be python-apt’s @sourceslist.py@:

<code class="python">
from __future__ import absolute_import, print_function

import glob
import logging
import os.path
import re
import shutil
import time

import apt_pkg
from .distinfo import DistInfo
#from apt_pkg import gettext as _


# some global helpers

__all__ = ['is_mirror', 'SourceEntry', 'NullMatcher', 'SourcesList',
           'SourceEntryMatcher']


def is_mirror(master_uri, compare_uri):
    """ check if the given add_url is idential or a mirror of orig_uri e.g.:
        master_uri = archive.ubuntu.com
        compare_uri = de.archive.ubuntu.com
        -> True
    """
    # remove trailing spaces and "/"
    compare_uri = compare_uri.rstrip("/ ")
    master_uri = master_uri.rstrip("/ ")
    # uri is identical
    if compare_uri == master_uri:
        #print "Identical"
        return True
    # master_uri is a master site and compare_uri has the form "XX.mastersite"
    # (e.g. de.archive.ubuntu.com)
    try:
        compare_srv = compare_uri.split("//")[1]
        master_srv = master_uri.split("//")[1]
        #print "%s == %s " % (add_srv, orig_srv)
    except IndexError:  # ok, something's wrong here
        #print "IndexError"
        return False
    # remove the leading "<country>." (if any) and see if that helps
    if "." in compare_srv and \
           compare_srv[compare_srv.index(".") + 1:] == master_srv:
        #print "Mirror"
        return True
    return False


def uniq(s):
    """ simple and efficient way to return uniq collection

    This is not intended for use with a SourceList. It is provided
    for internal use only. It does not have a leading underscore to
    not break any old code that uses it; but it should not be used
    in new code (and is not listed in __all__)."""
    return list(set(s))


class SourceEntry(object):
    """ single sources.list entry """

    def __init__(self, line, file=None):
        self.invalid = False         # is the source entry valid
        self.disabled = False        # is it disabled ('#' in front)
        self.type = ""               # what type (deb, deb-src)
        self.architectures = []      # architectures
        self.trusted = None          # Trusted
        self.uri = ""                # base-uri
        self.dist = ""               # distribution (dapper, edgy, etc)
        self.comps = []              # list of available components (may be empty)
        self.comment = ""            # (optional) comment
        self.line = line             # the original sources.list line
        if file is None:
            file = apt_pkg.config.find_dir(
                "Dir::Etc") + apt_pkg.config.find("Dir::Etc::sourcelist")
        self.file = file             # the file that the entry is located in
        self.parse(line)
        self.template = None         # type DistInfo.Suite
        self.children = []

    def __eq__(self, other):
        """ equal operator for two sources.list entries """
        return (self.disabled == other.disabled and
                self.type == other.type and
                self.uri == other.uri and
                self.dist == other.dist and
                self.comps == other.comps)

    def mysplit(self, line):
        """ a split() implementation that understands the sources.list
            format better and takes [] into account (for e.g. cdroms) """
        line = line.strip()
        pieces = []
        tmp = ""
        # we are inside a [..] block
        p_found = False
        space_found = False
        for i in range(len(line)):
            if line[i] == "[":
                if space_found:
                    space_found = False
                    p_found = True
                    pieces.append(tmp)
                    tmp = line[i]
                else:
                    p_found = True
                    tmp += line[i]
            elif line[i] == "]":
                p_found = False
                tmp += line[i]
            elif space_found and not line[i].isspace():
                # we skip one or more space
                space_found = False
                pieces.append(tmp)
                tmp = line[i]
            elif line[i].isspace() and not p_found:
                # found a whitespace
                space_found = True
            else:
                tmp += line[i]
        # append last piece
        if len(tmp) > 0:
            pieces.append(tmp)
        return pieces
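    # Example (illustrative line only): bracketed option blocks are kept
    # as a single piece rather than split on the spaces inside them, e.g.
    #   mysplit('deb [arch=amd64 trusted=yes] http://x/ stretch main')
    #   -> ['deb', '[arch=amd64 trusted=yes]', 'http://x/', 'stretch', 'main']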

    def parse(self, line):
        """ parse a given sources.list (textual) line and break it up
            into the fields we have """
        line = self.line.strip()
        #print line
        # check if the source is enabled/disabled
        if line == "" or line == "#":  # empty line
            self.invalid = True
            return
        if line[0] == "#":
            self.disabled = True
            pieces = line[1:].strip().split()
            # if it does not look like a disabled deb line, return
            if not pieces[0] in ("rpm", "rpm-src", "deb", "deb-src"):
                self.invalid = True
                return
            else:
                line = line[1:]
        # check for another "#" in the line (this is treated as a comment)
        i = line.find("#")
        if i > 0:
            self.comment = line[i + 1:]
            line = line[:i]
        # source is ok, split it and see what we have
        pieces = self.mysplit(line)
        # Sanity check
        if len(pieces) < 3:
            self.invalid = True
            return
        # Type, deb or deb-src
        self.type = pieces[0].strip()
        # Sanity check
        if self.type not in ("deb", "deb-src", "rpm", "rpm-src"):
            self.invalid = True
            return

        if pieces[1].strip()[0] == "[":
            options = pieces.pop(1).strip("[]").split()
            for option in options:
                try:
                    key, value = option.split("=", 1)
                except Exception:
                    self.invalid = True
                else:
                    if key == "arch":
                        self.architectures = value.split(",")
                    elif key == "trusted":
                        self.trusted = apt_pkg.string_to_bool(value)
                    else:
                        self.invalid = True

        # URI
        self.uri = pieces[1].strip()
        if len(self.uri) < 1:
            self.invalid = True
        # distro and components (optional)
        # Directory or distro
        self.dist = pieces[2].strip()
        if len(pieces) > 3:
            # List of components
            self.comps = pieces[3:]
        else:
            self.comps = []

    def set_enabled(self, new_value):
        """ set a line to enabled or disabled """
        self.disabled = not new_value
        # enable, remove all "#" from the start of the line
        if new_value:
            self.line = self.line.lstrip().lstrip('#')
        else:
            # disabled, add a "#"
            if self.line.strip()[0] != "#":
                self.line = "#" + self.line

    def __str__(self):
        """ debug helper """
        return self.str().strip()

    def str(self):
        """ return the current line as string """
        if self.invalid:
            return self.line
        line = ""
        if self.disabled:
            line = "# "

        line += self.type

        if self.architectures and self.trusted is not None:
            line += " [arch=%s trusted=%s]" % (
                ",".join(self.architectures), "yes" if self.trusted else "no")
        elif self.trusted is not None:
            line += " [trusted=%s]" % ("yes" if self.trusted else "no")
        elif self.architectures:
            line += " [arch=%s]" % ",".join(self.architectures)
        line += " %s %s" % (self.uri, self.dist)
        if len(self.comps) > 0:
            line += " " + " ".join(self.comps)
        if self.comment != "":
            line += " #" + self.comment
        line += "\n"
        return line


class NullMatcher(object):
    """ a Matcher that does nothing """

    def match(self, s):
        return True


class SourcesList(object):
    """ represents the full sources.list + sources.list.d file """

    def __init__(self,
                 withMatcher=True,
                 matcherPath="/usr/share/python-apt/templates/"):
        self.list = []          # the actual SourceEntries Type
        if withMatcher:
            self.matcher = SourceEntryMatcher(matcherPath)
        else:
            self.matcher = NullMatcher()
        self.refresh()

    def refresh(self):
        """ update the list of known entries """
        self.list = []
        # read sources.list
        file = apt_pkg.config.find_file("Dir::Etc::sourcelist")
        self.load(file)
        # read sources.list.d
        partsdir = apt_pkg.config.find_dir("Dir::Etc::sourceparts")
        for file in glob.glob("%s/*.list" % partsdir):
            self.load(file)
        # check if the source item fits a predefined template
        for source in self.list:
            if not source.invalid:
                self.matcher.match(source)

    def __iter__(self):
        """ simple iterator to go over self.list, returns SourceEntry
            types """
        for entry in self.list:
            yield entry

    def __find(self, *predicates, **attrs):
        for source in self.list:
            if (all(getattr(source, key) == attrs[key] for key in attrs) and
                    all(predicate(source) for predicate in predicates)):
                yield source

    def add(self, type, uri, dist, orig_comps, comment="", pos=-1, file=None,
            architectures=[]):
        """
        Add a new source to the sources.list.
        The method will search for existing matching repos and will try to
        reuse them as far as possible
        """

        architectures = set(architectures)
        # create a working copy of the component list so that
        # we can modify it later
        comps = orig_comps[:]
        sources = self.__find(lambda s: set(s.architectures) == architectures,
                              disabled=False, invalid=False, type=type,
                              uri=uri, dist=dist)
        # check if we have this source already in the sources.list
        for source in sources:
            for new_comp in comps:
                if new_comp in source.comps:
                    # we have this component already, delete it
                    # from the new_comps list
                    del comps[comps.index(new_comp)]
                    if len(comps) == 0:
                        return source

        sources = self.__find(lambda s: set(s.architectures) == architectures,
                              invalid=False, type=type, uri=uri, dist=dist)
        for source in sources:
            # if there is a repo with the same (type, uri, dist) just add the
            # components
            if source.disabled and set(source.comps) == set(comps):
                source.disabled = False
                return source
            elif not source.disabled:
                source.comps = uniq(source.comps + comps)
                return source
        # there isn't any matching source, so create a new line and parse it
        line = type
        if architectures:
            line += " [arch=%s]" % ",".join(architectures)
        line += " %s %s" % (uri, dist)
        for c in comps:
            line = line + " " + c
        if comment != "":
            line = "%s #%s\n" % (line, comment)
        line = line + "\n"
        new_entry = SourceEntry(line)
        if file is not None:
            new_entry.file = file
        self.matcher.match(new_entry)
        self.list.insert(pos, new_entry)
        return new_entry

    def remove(self, source_entry):
        """ remove the specified entry from the sources.list """
        self.list.remove(source_entry)

    def restore_backup(self, backup_ext):
        " restore sources.list files based on the backup extension "
        file = apt_pkg.config.find_file("Dir::Etc::sourcelist")
        if os.path.exists(file + backup_ext) and os.path.exists(file):
            shutil.copy(file + backup_ext, file)
        # now sources.list.d
        partsdir = apt_pkg.config.find_dir("Dir::Etc::sourceparts")
        for file in glob.glob("%s/*.list" % partsdir):
            if os.path.exists(file + backup_ext):
                shutil.copy(file + backup_ext, file)

    def backup(self, backup_ext=None):
        """ make a backup of the current source files, if no backup extension
            is given, the current date/time is used (and returned) """
        already_backuped = set()
        if backup_ext is None:
            backup_ext = time.strftime("%y%m%d.%H%M")
        for source in self.list:
            if (source.file not in already_backuped and
                os.path.exists(source.file)):
                shutil.copy(source.file, "%s%s" % (source.file, backup_ext))
        return backup_ext

    def load(self, file):
        """ (re)load the current sources """
        try:
            with open(file, "r") as f:
                for line in f:
                    source = SourceEntry(line, file)
                    self.list.append(source)
        except EnvironmentError:
            logging.warning("could not open file '%s'\n" % file)

    def save(self):
        """ save the current sources """
        files = {}
        # write an empty default config file if there aren't any sources
        if len(self.list) == 0:
            path = apt_pkg.config.find_file("Dir::Etc::sourcelist")
            header = (
                "## See sources.list(5) for more information, especialy\n"
                "# Remember that you can only use http, ftp or file URIs\n"
                "# CDROMs are managed through the apt-cdrom tool.\n")

            with open(path, "w") as f:
                f.write(header)
            return

        try:
            for source in self.list:
                if source.file not in files:
                    files[source.file] = open(source.file, "w")
                files[source.file].write(source.str())
        finally:
            for f in files:
                files[f].close()

    def check_for_relations(self, sources_list):
        """get all parent and child channels in the sources list"""
        parents = []
        used_child_templates = {}
        for source in sources_list:
            # try to avoid checking uninteresting sources
            if source.template is None:
                continue
            # set up a dict with all used child templates and corresponding
            # source entries
            if source.template.child:
                key = source.template
                if key not in used_child_templates:
                    used_child_templates[key] = []
                temp = used_child_templates[key]
                temp.append(source)
            else:
                # store each source with children aka. a parent :)
                if len(source.template.children) > 0:
                    parents.append(source)
        #print self.used_child_templates
        #print self.parents
        return (parents, used_child_templates)


class SourceEntryMatcher(object):
    """ matcher class to make a source entry look nice
        lots of predefined matchers to make it i18n/gettext friendly
        """

    def __init__(self, matcherPath):
        self.templates = []
        # Get the human readable channel and comp names from the channel .infos
        spec_files = glob.glob("%s/*.info" % matcherPath)
        for f in spec_files:
            f = os.path.basename(f)
            i = f.find(".info")
            f = f[0:i]
            dist = DistInfo(f, base_dir=matcherPath)
            for template in dist.templates:
                if template.match_uri is not None:
                    self.templates.append(template)
        return

    def match(self, source):
        """Add a matching template to the source"""
        found = False
        for template in self.templates:
            if (re.search(template.match_uri, source.uri) and
                    re.match(template.match_name, source.dist) and
                    # deb is a valid fallback for deb-src (if that is not
                    # defined, see #760035
                    (source.type == template.type or template.type == "deb")):
                found = True
                source.template = template
                break
            elif (template.is_mirror(source.uri) and
                      re.match(template.match_name, source.dist)):
                found = True
                source.template = template
                break
        return found


# some simple tests
if __name__ == "__main__":
    apt_pkg.init_config()
    sources = SourcesList()

    for entry in sources:
        logging.info("entry %s" % entry.str())
        #print entry.uri

    mirror = is_mirror("http://archive.ubuntu.com/ubuntu/",
                       "http://de.archive.ubuntu.com/ubuntu/")
    logging.info("is_mirror(): %s" % mirror)

    logging.info(is_mirror("http://archive.ubuntu.com/ubuntu",
                    "http://de.archive.ubuntu.com/ubuntu/"))
    logging.info(is_mirror("http://archive.ubuntu.com/ubuntu/",
                    "http://de.archive.ubuntu.com/ubuntu"))
</code></pre>
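
If this file really is python-apt’s @aptsources/sourceslist.py@ (the @SourcesList@/@SourceEntry@ names match), it can be driven directly. A sketch, under that assumption, of how one could comment out the repository apt rejects (the onion URI is taken from the error above; needs root, and on Tails anything outside persistent storage is lost on reboot):

<pre><code class="python">
import apt_pkg
from aptsources.sourceslist import SourcesList

apt_pkg.init_config()  # load apt's configuration defaults

# Walk sources.list plus sources.list.d and disable the entry whose
# InRelease signature apt rejected.
sources = SourcesList()
for entry in sources:
    if not entry.invalid and "jenw7xbd6tf7vfhp.onion" in entry.uri:
        entry.set_enabled(False)  # prefixes the line with '#'
sources.save()
</code></pre>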

The other one I find really interesting is @_datasource.py@ (this one looks like numpy’s, from @numpy.lib@):

<pre><code class="python">
from __future__ import division, absolute_import, print_function

import os
import sys
import shutil

_open = open


# Using a class instead of a module-level dictionary
# to reduce the initial 'import numpy' overhead by
# deferring the import of bz2 and gzip until needed

# TODO: .zip support, .tar support?
class _FileOpeners(object):
    """
    Container for different methods to open (un-)compressed files.

    `_FileOpeners` contains a dictionary that holds one method for each
    supported file format. Attribute lookup is implemented in such a way
    that an instance of `_FileOpeners` itself can be indexed with the keys
    of that dictionary. Currently uncompressed files as well as files
    compressed with ``gzip`` or ``bz2`` compression are supported.

    Notes
    -----
    `_file_openers`, an instance of `_FileOpeners`, is made available for
    use in the `_datasource` module.

    Examples
    --------
    >>> np.lib._datasource._file_openers.keys()
    [None, '.bz2', '.gz']
    >>> np.lib._datasource._file_openers['.gz'] is gzip.open
    True

    """

    def __init__(self):
        self._loaded = False
        self._file_openers = {None: open}

    def _load(self):
        if self._loaded:
            return
        try:
            import bz2
            self._file_openers[".bz2"] = bz2.BZ2File
        except ImportError:
            pass
        try:
            import gzip
            self._file_openers[".gz"] = gzip.open
        except ImportError:
            pass
        self._loaded = True

    def keys(self):
        """
        Return the keys of currently supported file openers.

        Parameters
        ----------
        None

        Returns
        -------
        keys : list
            The keys are None for uncompressed files and the file extension
            strings (i.e. ``'.gz'``, ``'.bz2'``) for supported compression
            methods.

        """
        self._load()
        return list(self._file_openers.keys())

    def __getitem__(self, key):
        self._load()
        return self._file_openers[key]

_file_openers = _FileOpeners()

def open(path, mode='r', destpath=os.curdir):
    """
    Open `path` with `mode` and return the file object.

    If ``path`` is an URL, it will be downloaded, stored in the
    `DataSource` `destpath` directory and opened from there.

    Parameters
    ----------
    path : str
        Local file path or URL to open.
    mode : str, optional
        Mode to open `path`. Mode 'r' for reading, 'w' for writing, 'a' to
        append. Available modes depend on the type of object specified by
        path.  Default is 'r'.
    destpath : str, optional
        Path to the directory where the source file gets downloaded to for
        use.  If `destpath` is None, a temporary directory will be created.
        The default path is the current directory.

    Returns
    -------
    out : file object
        The opened file.

    Notes
    -----
    This is a convenience function that instantiates a `DataSource` and
    returns the file object from ``DataSource.open(path)``.

    """

    ds = DataSource(destpath)
    return ds.open(path, mode)


class DataSource (object):
    """
    DataSource(destpath='.')

    A generic data source file (file, http, ftp, ...).

    DataSources can be local files or remote files/URLs.  The files may
    also be compressed or uncompressed. DataSource hides some of the
    low-level details of downloading the file, allowing you to simply pass
    in a valid file path (or URL) and obtain a file object.

    Parameters
    ----------
    destpath : str or None, optional
        Path to the directory where the source file gets downloaded to for
        use.  If `destpath` is None, a temporary directory will be created.
        The default path is the current directory.

    Notes
    -----
    URLs require a scheme string (``http://``) to be used, without it they
    will fail::

        >>> repos = DataSource()
        >>> repos.exists('www.google.com/index.html')
        False
        >>> repos.exists('http://www.google.com/index.html')
        True

    Temporary directories are deleted when the DataSource is deleted.

    Examples
    --------
    ::

        >>> ds = DataSource('/home/guido')
        >>> urlname = 'http://www.google.com/index.html'
        >>> gfile = ds.open('http://www.google.com/index.html')  # remote file
        >>> ds.abspath(urlname)
        '/home/guido/www.google.com/site/index.html'

        >>> ds = DataSource(None)  # use with temporary file
        >>> ds.open('/home/guido/foobar.txt')
        <open file '/home/guido.foobar.txt', mode 'r' at 0x91d4430>
        >>> ds.abspath('/home/guido/foobar.txt')
        '/tmp/tmpy4pgsP/home/guido/foobar.txt'

    """

    def __init__(self, destpath=os.curdir):
        """Create a DataSource with a local path at destpath."""
        if destpath:
            self._destpath = os.path.abspath(destpath)
            self._istmpdest = False
        else:
            import tempfile  # deferring import to improve startup time
            self._destpath = tempfile.mkdtemp()
            self._istmpdest = True

    def __del__(self):
        # Remove temp directories
        if self._istmpdest:
            shutil.rmtree(self._destpath)

    def _iszip(self, filename):
        """Test if the filename is a zip file by looking at the file extension.

        """
        fname, ext = os.path.splitext(filename)
        return ext in _file_openers.keys()

    def _iswritemode(self, mode):
        """Test if the given mode will open a file for writing."""

        # Currently only used to test the bz2 files.
        _writemodes = ("w", "+")
        for c in mode:
            if c in _writemodes:
                return True
        return False

    def _splitzipext(self, filename):
        """Split zip extension from filename and return filename.

        *Returns*:
            base, zip_ext : {tuple}

        """

        if self._iszip(filename):
            return os.path.splitext(filename)
        else:
            return filename, None

    def _possible_names(self, filename):
        """Return a tuple containing compressed filename variations."""
        names = [filename]
        if not self._iszip(filename):
            for zipext in _file_openers.keys():
                if zipext:
                    names.append(filename+zipext)
        return names

    def _isurl(self, path):
        """Test if path is a net location.  Tests the scheme and netloc."""

        # We do this here to reduce the 'import numpy' initial import time.
        if sys.version_info[0] >= 3:
            from urllib.parse import urlparse
        else:
            from urlparse import urlparse

        # BUG : URLs require a scheme string ('http://') to be used.
        #       www.google.com will fail.
        #       Should we prepend the scheme for those that don't have it and
        #       test that also?  Similar to the way we append .gz and test
        #       for compressed versions of files.

        scheme, netloc, upath, uparams, uquery, ufrag = urlparse(path)
        return bool(scheme and netloc)

    def _cache(self, path):
        """Cache the file specified by path.

        Creates a copy of the file in the datasource cache.

        """
        # We import these here because importing urllib2 is slow and
        # a significant fraction of numpy's total import time.
        if sys.version_info[0] >= 3:
            from urllib.request import urlopen
            from urllib.error import URLError
        else:
            from urllib2 import urlopen
            from urllib2 import URLError

        upath = self.abspath(path)

        # ensure directory exists
        if not os.path.exists(os.path.dirname(upath)):
            os.makedirs(os.path.dirname(upath))

        # TODO: Doesn't handle compressed files!
        if self._isurl(path):
            try:
                openedurl = urlopen(path)
                f = _open(upath, 'wb')
                try:
                    shutil.copyfileobj(openedurl, f)
                finally:
                    f.close()
                    openedurl.close()
            except URLError:
                raise URLError("URL not found: %s" % path)
        else:
            shutil.copyfile(path, upath)
        return upath

    def _findfile(self, path):
        """Searches for ``path`` and returns full path if found.

        If path is an URL, _findfile will cache a local copy and return the
        path to the cached file.  If path is a local file, _findfile will
        return a path to that local file.

        The search will include possible compressed versions of the file
        and return the first occurrence found.

        """

        # Build list of possible local file paths
        if not self._isurl(path):
            # Valid local paths
            filelist = self._possible_names(path)
            # Paths in self._destpath
            filelist += self._possible_names(self.abspath(path))
        else:
            # Cached URLs in self._destpath
            filelist = self._possible_names(self.abspath(path))
            # Remote URLs
            filelist = filelist + self._possible_names(path)

        for name in filelist:
            if self.exists(name):
                if self._isurl(name):
                    name = self._cache(name)
                return name
        return None

    def abspath(self, path):
        """
        Return absolute path of file in the DataSource directory.

        If `path` is an URL, then `abspath` will return either the location
        the file exists locally or the location it would exist when opened
        using the `open` method.

        Parameters
        ----------
        path : str
            Can be a local file or a remote URL.

        Returns
        -------
        out : str
            Complete path, including the `DataSource` destination directory.

        Notes
        -----
        The functionality is based on `os.path.abspath`.

        """
        # We do this here to reduce the 'import numpy' initial import time.
        if sys.version_info[0] >= 3:
            from urllib.parse import urlparse
        else:
            from urlparse import urlparse

        # TODO:  This should be more robust.  Handles case where path includes
        #        the destpath, but not other sub-paths. Failing case:
        #        path = /home/guido/datafile.txt
        #        destpath = /home/alex/
        #        upath = self.abspath(path)
        #        upath == '/home/alex/home/guido/datafile.txt'

        # handle case where path includes self._destpath
        splitpath = path.split(self._destpath, 2)
        if len(splitpath) > 1:
            path = splitpath[1]
        scheme, netloc, upath, uparams, uquery, ufrag = urlparse(path)
        netloc = self._sanitize_relative_path(netloc)
        upath = self._sanitize_relative_path(upath)
        return os.path.join(self._destpath, netloc, upath)

    def _sanitize_relative_path(self, path):
        """Return a sanitised relative path for which
        os.path.abspath(os.path.join(base, path)).startswith(base)
        """
        last = None
        path = os.path.normpath(path)
        while path != last:
            last = path
            # Note: os.path.join treats '/' as os.sep on Windows
            path = path.lstrip(os.sep).lstrip('/')
            path = path.lstrip(os.pardir).lstrip('..')
            drive, path = os.path.splitdrive(path)  # for Windows
        return path
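    # Example: _sanitize_relative_path('../../etc/passwd') -> 'etc/passwd',
    # so joining the result onto self._destpath cannot escape it.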

    def exists(self, path):
        """
        Test if path exists.

        Test if `path` exists as (and in this order):

        - a local file.
        - a remote URL that has been downloaded and stored locally in the
          `DataSource` directory.
        - a remote URL that has not been downloaded, but is valid and
          accessible.

        Parameters
        ----------
        path : str
            Can be a local file or a remote URL.

        Returns
        -------
        out : bool
            True if `path` exists.

        Notes
        -----
        When `path` is an URL, `exists` will return True if it's either
        stored locally in the `DataSource` directory, or is a valid remote
        URL.  `DataSource` does not discriminate between the two, the file
        is accessible if it exists in either location.

        """
        # We import this here because importing urllib2 is slow and
        # a significant fraction of numpy's total import time.
        if sys.version_info[0] >= 3:
            from urllib.request import urlopen
            from urllib.error import URLError
        else:
            from urllib2 import urlopen
            from urllib2 import URLError

        # Test local path
        if os.path.exists(path):
            return True

        # Test cached url
        upath = self.abspath(path)
        if os.path.exists(upath):
            return True

        # Test remote url
        if self._isurl(path):
            try:
                netfile = urlopen(path)
                netfile.close()
                del netfile
                return True
            except URLError:
                return False
        return False

    def open(self, path, mode='r'):
        """
        Open and return file-like object.

        If `path` is an URL, it will be downloaded, stored in the
        `DataSource` directory and opened from there.

        Parameters
        ----------
        path : str
            Local file path or URL to open.
        mode : {'r', 'w', 'a'}, optional
            Mode to open `path`.  Mode 'r' for reading, 'w' for writing,
            'a' to append. Available modes depend on the type of object
            specified by `path`. Default is 'r'.

        Returns
        -------
        out : file object
            File object.

        """

        # TODO: There is no support for opening a file for writing which
        #       doesn't exist yet (creating a file).  Should there be?

        # TODO: Add a ``subdir`` parameter for specifying the subdirectory
        #       used to store URLs in self._destpath.

        if self._isurl(path) and self._iswritemode(mode):
            raise ValueError("URLs are not writeable")

        # NOTE: _findfile will fail on a new file opened for writing.
        found = self._findfile(path)
        if found:
            _fname, ext = self._splitzipext(found)
            if ext == '.bz2':
                # bz2 file objects reject '+' modes; strip it from mode
                mode = mode.replace("+", "")
            return _file_openers[ext](found, mode=mode)
        else:
            raise IOError("%s not found." % path)


class Repository (DataSource):
    """
    Repository(baseurl, destpath='.')

    A data repository where multiple DataSource's share a base
    URL/directory.

    `Repository` extends `DataSource` by prepending a base URL (or
    directory) to all the files it handles. Use `Repository` when you will
    be working with multiple files from one base URL.  Initialize
    `Repository` with the base URL, then refer to each file by its filename
    only.

    Parameters
    ----------
    baseurl : str
        Path to the local directory or remote location that contains the
        data files.
    destpath : str or None, optional
        Path to the directory where the source file gets downloaded to for
        use.  If `destpath` is None, a temporary directory will be created.
        The default path is the current directory.

    Examples
    --------
    To analyze all files in the repository, do something like this
    (note: this is not self-contained code)::

        >>> repos = np.lib._datasource.Repository('/home/user/data/dir/')
        >>> for filename in filelist:
        ...     fp = repos.open(filename)
        ...     fp.analyze()
        ...     fp.close()

    Similarly you could use a URL for a repository::

        >>> repos = np.lib._datasource.Repository('http://www.xyz.edu/data')

    """

    def __init__(self, baseurl, destpath=os.curdir):
        """Create a Repository with a shared url or directory of baseurl."""
        DataSource.__init__(self, destpath=destpath)
        self._baseurl = baseurl

    def __del__(self):
        DataSource.__del__(self)

    def _fullpath(self, path):
        """Return complete path for path.  Prepends baseurl if necessary."""
        splitpath = path.split(self._baseurl, 2)
        if len(splitpath) == 1:
            result = os.path.join(self._baseurl, path)
        else:
            result = path    # path contains baseurl already
        return result

    def _findfile(self, path):
        """Extend DataSource method to prepend baseurl to ``path``."""
        return DataSource._findfile(self, self._fullpath(path))

    def abspath(self, path):
        """
        Return absolute path of file in the Repository directory.

        If `path` is an URL, then `abspath` will return either the location
        the file exists locally or the location it would exist when opened
        using the `open` method.

        Parameters
        ----------
        path : str
            Can be a local file or a remote URL. This may, but does not
            have to, include the `baseurl` with which the `Repository` was
            initialized.

        Returns
        -------
        out : str
            Complete path, including the `DataSource` destination directory.

        """
        return DataSource.abspath(self, self._fullpath(path))

    def exists(self, path):
        """
        Test if path exists prepending Repository base URL to path.

        Test if `path` exists as (and in this order):

        - a local file.
        - a remote URL that has been downloaded and stored locally in the
          `DataSource` directory.
        - a remote URL that has not been downloaded, but is valid and
          accessible.

        Parameters
        ----------
        path : str
            Can be a local file or a remote URL. This may, but does not
            have to, include the `baseurl` with which the `Repository` was
            initialized.

        Returns
        -------
        out : bool
            True if `path` exists.

        Notes
        -----
        When `path` is an URL, `exists` will return True if it's either
        stored locally in the `DataSource` directory, or is a valid remote
        URL.  `DataSource` does not discriminate between the two, the file
        is accessible if it exists in either location.

        """
        return DataSource.exists(self, self._fullpath(path))

    def open(self, path, mode='r'):
        """
        Open and return file-like object prepending Repository base URL.

        If `path` is an URL, it will be downloaded, stored in the
        DataSource directory and opened from there.

        Parameters
        ----------
        path : str
            Local file path or URL to open. This may, but does not have to,
            include the `baseurl` with which the `Repository` was
            initialized.
        mode : {'r', 'w', 'a'}, optional
            Mode to open `path`.  Mode 'r' for reading, 'w' for writing,
            'a' to append. Available modes depend on the type of object
            specified by `path`. Default is 'r'.

        Returns
        -------
        out : file object
            File object.

        """
        return DataSource.open(self, self._fullpath(path), mode)

    def listdir(self):
        """
        List files in the source Repository.

        Returns
        -------
        files : list of str
            List of file names (not containing a directory part).

        Notes
        -----
        Does not currently work for remote repositories.

        """
        if self._isurl(self._baseurl):
            raise NotImplementedError(
                  "Directory listing of URLs, not supported yet.")
        else:
            return os.listdir(self._baseurl)

</code></pre>
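
The docstrings already show the behaviour; here is a tiny sketch of the same thing (assuming a numpy build where this private module still lives at @numpy.lib._datasource@; the URLs are just illustrations):

<pre><code class="python">
from numpy.lib import _datasource

# destpath=None creates a temporary cache directory that is removed
# again when the DataSource is garbage-collected.
ds = _datasource.DataSource(None)

print(ds.exists('www.example.com/index.html'))         # False: scheme missing
print(ds.exists('http://www.example.com/index.html'))  # True if reachable
print(ds.abspath('http://www.example.com/data.txt'))   # where a download lands
</code></pre>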

Feature Branch: Help needed

Originally created by @g00gl3vil as Redmine issue #16682.
