"Index of ..." directory's files listing

On my courses we've got homework on a site in folders such as:

http://example.com/files/tasks1-edc34rtgfds
http://example.com/files/tasks2-0bg454fgerg
http://example.com/files/tasks3-h1dlkjiojo8
...
Each tasksi-xxxxxxxxxxx is a folder name ending in 11 random characters. When you view one of the above URLs in a browser, you see an "Index of /tasksi-xxxxxxxxxxx" page listing all the files in that folder.
When you view http://example.com/files/ itself, you see only an empty HTML page with the words "Hello, world".
The problem is that you can't look at the next task without knowing its URL. For example, we have the URLs for tasks1 and tasks2, but we can't guess what the tasks3 URL will be, since we'd need to know the 11 random characters at the end.
How can I get a list of all the directories? (Is there a way to request something like http://example.com/files/task1-aflafjal343/.., or some other trick?) I want to see all the upcoming homework tasks.
I would view the source code of a current task page and see what you can learn from file and folder paths, comments in the HTML, and so on. But what if the teachers haven't created any more tasks than the ones you already have for the current day, week, or month?
Have you tried decoding the string of 11 random characters? Try searching for a Base64 decoder or an HTML string decoder. Eleven characters seems short for MD5, but why not give that a go and run the strings through an MD5 decoder.
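A minimal sketch of that decoding check, using one of the example suffixes from the question (the specific suffix is just illustrative; a random string usually won't decode to anything readable):

```python
import base64
import hashlib

# Suffix taken from one of the example URLs in the question.
suffix = "edc34rtgfds"

# Base64 input must be a multiple of 4 characters, so pad first.
padded = suffix + "=" * (-len(suffix) % 4)
try:
    decoded = base64.b64decode(padded)
    print("base64 bytes:", decoded)
except Exception as exc:
    print("not valid base64:", exc)

# An MD5 digest is 32 hex characters, so an 11-character suffix
# cannot be a full MD5 hash; at best it could be a truncated one.
print(len(hashlib.md5(b"tasks1").hexdigest()))  # 32
```

If the bytes that come out are printable, the suffix may be an encoding of something meaningful; if not, it's probably just random.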
Lastly have you asked your teacher for access to the future course work?
If you're trying to skip ahead and view your homework assignments early, I don't think there's any way to do it short of accessing the server itself (via SSH, SFTP, etc.) and looking in that directory. Another way to circumvent this sort of "security" is to look for information leakage:
Search engines are quite good at seeking out hidden directories by virtue of their web crawlers.
Server logs are sometimes accidentally left publicly accessible, which can also give away such information.
Error messages are another common source of information leakage, which can tell you about directory structure.
Otherwise, you'll have to resort to more extreme measures:
Brute force search
Brute-force the directory names using a script that floods the server with requests until it gets a 200 OK reply.
However, short of finding out what kind of hash function is being used, or finding a vulnerability in the PRNG, you'll have about 36^11 ≈ 1.32 × 10^17 combinations to try (assuming lowercase letters and digits).
And, given that a typical web server handles anywhere from 10–20 requests per second up to about 70,000 requests per second on the very high end, you'd be looking at roughly 59,600 years to exhaust the search space even at the top rate, and that's per directory.
Now, maybe they're using multiple load-balanced servers, but even then your homework probably isn't hosted on the fastest servers in the world. And hammering the web server at those speeds will probably raise a few red flags both with your ISP and with the web host, even if you're sending the requests through many random proxies.
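A quick back-of-the-envelope check of those numbers (assuming an alphabet of 36 symbols, i.e. lowercase letters and digits, and 11 characters):

```python
# Size of the search space for an 11-character suffix over
# lowercase letters + digits (36 symbols).
combos = 36 ** 11
print(f"{combos:.3e} combinations")

# Even at a very generous 70,000 requests per second:
seconds = combos / 70_000
years = seconds / (365.25 * 24 * 3600)  # roughly 59,600 years
print(f"~{years:,.0f} years per directory")
```

The exact figure shifts a little depending on the assumed request rate and year length, but the conclusion doesn't: exhaustive search is hopeless.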
Hack the web server
You could use a vulnerability scanner and/or fuzzer to find an exploitable bug in the system and gain access to data that way. Who knows, maybe they're using an old unpatched version of IIS or Apache, or they're running web software that has a known SQL injection or file upload vulnerability, or their web server is misconfigured and you can use WebDAV to get a directory listing. If you can access the user account under which the files are hosted or run a backdoor script under that user, then you'll be able to see the full directory contents.
I suppose it all just depends on how badly you want to get an early look at your homework assignments. But I'm guessing it's probably less effort to sneak into your professor's office and just look up the assignments on his computer.
And there's always looking at last year's assignments for the same course, which are bound to be pretty close to what you'll have this year.
I think you've got an index.html, index.htm, or index.php (maybe even a "home.html") in that folder, which is the "Hello, world" page, and it would need to be deleted. The directory listing is generated by the web server (Apache?) via mod_autoindex, which only runs when there is no index page.
Now, to scan for the other files: try HTTrack, a personal web spider. It's used to copy entire websites and is relatively fast.
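For context, a minimal sketch of the relevant Apache directives (the directory path here is just an assumption for illustration):

```apache
# mod_autoindex only generates the "Index of /..." listing when
# none of the DirectoryIndex files exists in the directory.
<Directory "/var/www/files">
    Options +Indexes
    DirectoryIndex index.html index.htm index.php
</Directory>
```

So the "Hello, world" page in /files/ suppresses the listing there, while the tasksi-xxxxxxxxxxx folders, having no index file, fall back to the auto-generated listing.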