Discussion of the Product's Architecture and Tooling (MCP vs. Custom)
A significant portion of the conversation revolves around the product's technical architecture, specifically its decision not to use MCP (Model Context Protocol, an open standard for connecting agents to external tools and data sources) and instead to build custom in-house tools on a single-agent architecture.
- Skepticism about not using MCP: Several users questioned the decision to forgo MCP, arguing that it could mean missing out on ecosystem benefits and having to reinvent existing functionality.
- "But you are compatible with MCP, right? Otherwise users are going to miss out on the MCP ecosystem. And you are going to be spending all your time developing your own versions of MCP plugins. Wouldn't it be easier to improve the existing ones?" asked esafak.
- bfeynman expressed confusion, stating, "that just sounds like you have no idea what MCP is, I don't even like MCPs but I can't even understand what angle you are coming from unless they specifically mean using external MCPs instead of your own, since it is you know open source..."
- digitcatphd voiced concern that foundation models might eventually incorporate similar functionality, potentially disrupting products that don't integrate with these broader ecosystems: "I would argue Dropbox was a new product category rather than a feature and as such, was a much deeper strategic decision to enter that category than add a feature. My only recommendation would be to focus on deep complex workflows (E.g. N8N style) with extensive integrations or build out a developer community so you can build some data lock in, because if they are surface level templates surely these will get easily disrupted."
- Proprietary Approach Rationale: The product team defended their custom approach, emphasizing the quality and specific capabilities it enables (an illustrative sketch of this kind of setup follows the list below).
- hgaddipa001 explained their reasoning: "Find that the quality of them currently aren't there yet for a general system. They tend to be designed just to use that singular app instead of to be used in parallel with other apps."
- Later, hgaddipa001 stated, "It's a bit more complicated. We have a full custom single agent architecture, sort of like Manus that isn't fully compatible with MCP."
- The benefit of their custom tools was highlighted: "For example we can read and attach pdfs to gmail which not a lot of people can, since we have our own internal storage api," said hgaddipa001.
- On the question of indexing at scale, hgaddipa001 countered bfeynman by asking, "Why wouldn't we be indexing at scale?" and later described their indexing as "similar to glean (but a bit less elegant without the ACLs)."
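As background for this architecture debate, here is a minimal, purely illustrative sketch of what a single-agent setup with in-house tools (rather than external MCP servers) can look like. The tool names and the caller-supplied `choose_next_action` model call are hypothetical and not taken from Slashy's code.

```python
# Illustrative single-agent loop with in-house tools (hypothetical, not Slashy's code).
# Each tool is a plain function registered with a description the model can see;
# the backend, not an external MCP server, owns execution and credentials.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], str]

def search_drive(args: dict) -> str:
    # Placeholder: would call the internal storage/index API.
    return f"drive results for {args.get('query', '')!r}"

def draft_gmail(args: dict) -> str:
    # Placeholder: would create a Gmail draft, attaching files by internal storage id.
    return f"draft for {args.get('to', '')} with {len(args.get('attachment_ids', []))} attachment(s)"

TOOLS = {t.name: t for t in [
    Tool("search_drive", "Semantic search over the user's indexed Drive files", search_drive),
    Tool("draft_gmail", "Draft an email, optionally attaching stored files", draft_gmail),
]}

def run_agent(user_request: str, choose_next_action, max_steps: int = 5) -> str:
    """Single agent: one model decides which in-house tool to call next."""
    transcript = [("user", user_request)]
    for _ in range(max_steps):
        action = choose_next_action(transcript, list(TOOLS.values()))  # hypothetical LLM call
        if action["type"] == "final":
            return action["text"]
        result = TOOLS[action["tool"]].run(action["args"])
        transcript.append(("tool", result))
    return "step limit reached"
```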
Concerns Regarding Data Privacy and Security
A prevalent theme is user apprehension about granting the product access to sensitive personal accounts, particularly email and Google Drive, and the potential security and privacy implications.
- General Hesitation and Mistrust: Many users expressed immediate discomfort with the product accessing core personal data.
- "Jayakumark asked, "Looks nice, but little hesitant to give access to emails. What model is being used on backend ?"
- soniczentropy declared, "This is horrifying. Everyone should be horrified."
- brazukadev echoed this sentiment, stating, "So you have access to the users Gmail, not 'the agent'."
- Security Vulnerabilities and Developer Awareness: Users raised concerns about the inherent risks of AI agents interacting with personal accounts, especially given the potential for prompt injection attacks and the developers' perceived lack of deep security expertise.
- milkshakes voiced strong reservations: "you're building a tool that is designed to sink its tentacles into peoples' most personal accounts and take unsupervised automated actions with them, using a technology that has serious, well known, documented security issues. you haven't demonstrated any experience with, awareness of, or consideration for the security issues at hand, so the ideal amount of code to share would likely be all of it."
- The same user later elaborated on the security standard: "with respect to user security and privacy, doing your best is not much better than yolo security. the minimum standard should be to research the threat landscape, study the state of the art in methods to mitigate those threats, implement them, and test them thoroughly, yourselves and through vendors. iterate through that process continuously, alongside your development. it will never end. or, you can open source it and the internet does this for you for free."
- tehsuk stated, "Anything that gives any sort of system access to sensitive data and lets agents carry out actions on basically unchecked input sounds like a complete security and privacy nightmare by design."
- amonks raised critical questions about prompt injection vulnerabilities: "It seems like this collection of tools gives you a ton of lethal-trifecta risk for prompt injection attacks. How have you mitigated this—are you doing something like CaMeL?"
- The response from hgaddipa001, "We do a lot of processing on our backend to prevent against prompt injection, but there definitely still is some risk. We can do better on as is always the case. Need to read up on how CaMel does it. Do you have any good links?", was met with alarm: "That’s a pretty scary answer, to be honest," remarked amonks.
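For readers unfamiliar with the CaMeL approach amonks references, its core idea is that a privileged planner model never reads untrusted content (such as inbound email bodies), while a quarantined model may read that content but can only return constrained, typed values, so injected instructions cannot change which tools get called. A heavily simplified sketch of that separation, with hypothetical `privileged_plan` and `quarantined_extract` helpers:

```python
# Simplified sketch of CaMeL-style isolation (hypothetical helpers, not the real system).
# The privileged planner sees only the trusted user request and produces a fixed plan;
# the quarantined model sees untrusted data but can only fill in typed values,
# never add or change tool calls.

def handle_request(user_request: str, untrusted_email_body: str,
                   privileged_plan, quarantined_extract, tools) -> str:
    # 1. Plan built from trusted input only, e.g.
    #    [{"op": "extract", "field": "sender_address"},
    #     {"op": "send_reply", "args": {"to": "sender_address"}}]
    plan = privileged_plan(user_request)  # never sees untrusted_email_body

    values = {}
    for step in plan:
        if step["op"] == "extract":
            # 2. Quarantined model reads untrusted text but returns a constrained value.
            raw = quarantined_extract(untrusted_email_body, step["field"])
            values[step["field"]] = validate(step["field"], raw)
        else:
            # 3. Tool calls come only from the plan; untrusted text never chooses them.
            tools[step["op"]](**{k: values[v] for k, v in step["args"].items()})
    return "done"

def validate(field: str, raw: str) -> str:
    # Minimal example of a type/policy check before a value reaches a tool.
    if field == "sender_address" and "@" not in raw:
        raise ValueError("extracted value is not an email address")
    return raw
```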
- Clarification on Access and Data Handling: The developers attempted to clarify their data access model, explaining the use of OAuth and separate agent execution.
- hgaddipa001 initially stated, "We don't have access to any of your data," which drew immediate pushback: "How do you not have access to the data if I give you access to my email?" asked stavros.
- The developers clarified: "The agent does! We don't, and agent pulls in data only when executing queries."
- However, further questioning revealed a more direct involvement: "Yeah we store our user credentials on our side and manage them. Along with refreshing tokens and so forth," admitted hgaddipa001, leading to a corrective statement: "So you do have access to all the data. It's not really a great look if you're lying about what you have access to, and this is a technical audience, it's not like we don't know how agents work," said stavros.
- hgaddipa001 later explained, "We use Oauth, so it's easy for a user to disconnect."
- The developers also offered reassurance about the agent's limits: "Ohh we don't give it computer use access or anything like that. We inject tokens post tool call, so to protect users from the agent doing anything malicious."
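The "inject tokens post tool call" remark describes a common pattern: the model emits a tool call containing no secrets, and the backend attaches the user's stored OAuth access token (refreshing it if expired) only at execution time, so the agent never sees or can leak a credential. A rough sketch of that flow under those assumptions; the helper names and the `raw_rfc822_base64` argument are hypothetical, not Slashy's implementation:

```python
# Rough sketch of post-tool-call token injection (hypothetical, not Slashy's implementation).
# The model's tool call carries no secrets; the backend looks up, refreshes, and attaches
# the user's OAuth token only when the call is actually executed.
import time
import requests

def get_access_token(user_id: str, token_store, oauth_refresh) -> str:
    record = token_store[user_id]  # {"access_token", "refresh_token", "expires_at"}
    if record["expires_at"] <= time.time():
        record = oauth_refresh(record["refresh_token"])  # hypothetical refresh call to the provider
        token_store[user_id] = record
    return record["access_token"]

def execute_tool_call(user_id: str, call: dict, token_store, oauth_refresh) -> dict:
    # `call` comes from the model, e.g. {"tool": "gmail_create_draft", "args": {...}}
    if call["tool"] != "gmail_create_draft":
        raise ValueError("unknown tool")
    token = get_access_token(user_id, token_store, oauth_refresh)
    resp = requests.post(
        "https://gmail.googleapis.com/gmail/v1/users/me/drafts",
        headers={"Authorization": f"Bearer {token}"},  # injected here, after the model's output
        json={"message": {"raw": call["args"]["raw_rfc822_base64"]}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```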
Legality and Ethics of Web Scraping
The discussion also touched upon the legal and ethical considerations of scraping data, particularly from LinkedIn, using third-party vendors.
- Questions about LinkedIn Scraping: Users questioned the legality of scraping LinkedIn, referencing robots.txt and LinkedIn's potential to block such activity.
- nikolayasdf123 questioned, "is this legal? last time I checked linkedin.com/robots.txt do not allow scraping, unless explicit approval from linkedin."
- mritchie712 asked, "How does the scraper work? e.g. LinkedIn aggressively blocks scraping and you'd need to be logged in to see most things you'd care about. How do you handle that?"
- Vague Due Diligence and Legal Interpretations: The developers' reliance on third-party data vendors, and their assumption that those vendors had obtained explicit approval, raised concerns about due diligence. The legal nuances of web scraping were also debated.
- hgaddipa001 stated, "We get our data from third party data vendors who we assume have gotten explicit approval from linkedin!"
- scblock replied, "You assume! Such due diligence!"
- breadwinner offered a contrasting legal opinion: "If it is publicly available information it is legal to scrape it, regardless of what robots.txt says."
- otterley, identifying as an attorney, provided a more cautious perspective: "As an attorney (and this is not legal advice), I don't think it's quite that simple. The court held that the CFAA does not proscribe scraping of pages to which the user already has access and in a way that doesn't harm the service, and thus it's not a crime. But there are other mechanisms that might impact a scraper, such as civil liability, that have not been addressed uniformly by the courts yet." They also advised, "If you want a thorough analysis of legal risk--either for your business or for personal matters--hire a good lawyer."
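For context on the robots.txt references above: robots.txt is an advisory crawl policy published at a site's root, not an access control mechanism, which is one reason the legal questions are separate from the technical ones. A crawler that chooses to honor it can check permissions with Python's standard library; the user agent string below is just an example:

```python
# Check whether a robots.txt policy permits fetching a URL (advisory only; honoring it
# is not a substitute for the terms-of-service and legal questions discussed above).
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.linkedin.com/robots.txt")
rp.read()  # downloads and parses the policy

url = "https://www.linkedin.com/in/some-profile"   # example URL
print(rp.can_fetch("ExampleBot/1.0", url))         # likely False, given LinkedIn's restrictive policy
```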
The Changing Landscape of Hacker News Launches
Several comments reflected a sentiment that Hacker News launches, particularly in the AI space, have become generic and less exciting, with a focus on quick iteration and "shitcoin"-like behavior.
- Criticism of AI Launch Trends: A recurring theme was the perception that many AI product launches on HN are similar and lack genuine innovation.
- brazukadev expressed dissatisfaction: "Honestly, what have HN become? These AI projects are looking more and more like shitcoins and their creators are shitcoin shillers."
- brazukadev further elaborated: "Not really and this is totally not related to Slashy, it just look like the same as the other 20 Slashys launched last month. Launch HNs used to be exciting. Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself."
- EMM_386 agreed: "The strategy is throw a little bit of money at everything, hope one of them will become a unicorn, everyone gets richer. Rinse and repeat. You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good)."
- Developer Response to Criticism: The product team acknowledged these criticisms and tried to highlight what sets their product apart.
- hgaddipa001 responded to the "shitcoin" comment: "Have you tried out Slashy? What makes you say that."
- Later, hgaddipa001 reflected: "Hmm that's fair, we're definitely not the most exciting launch out there compared to others in our batch. I'd like to think the fact we do what we promise is exciting, but without trying the product hard to convey that well :)"
Product Reception and Potential Value
Despite the significant concerns raised, some users expressed positive sentiment and recognized the potential utility of the product.
- Positive Feedback: Some users found the product concept appealing and highlighted specific features they liked.
- namanyayg complimented the team: "Slashy is great and the founders are so talented. I've been following Pranjali on Twitter for a while -- they've got great weekly videos where they keep releasing new features. The team ships fast and I'm excited to see where they go."
- HeadphoneJunkie stated, "This is quite useful where has this been all my life. Email drafting is decent since it reads my drive, previous emails, and everything else so it has a good bit of context."
- milkshakes, despite strong security concerns, also offered encouragement: "i actually really like your product for what it's worth. don't listen to the haters. hackers build things... you clearly work hard and care deeply about what you are building, and it will be very useful."
- Suggestions for Improvement and Future Development: Users offered ideas for enhancing the product and addressing its limitations.
- felarof suggested: "You should make Slashy a chrome extension for BrowserOS (https://github.com/browseros-ai/BrowserOS), then it can read/extract Linkedin using user credentials :)"
- hgaddipa001 responded positively: "Hmm we've been considering a chrome extension."
- BrandiATMuhkuh asked how documents are indexed for semantic search and whether the approach scales to large data volumes, indicating interest and a desire for more detail (a rough sketch of a typical indexing pipeline follows this list).
- dcsan inquired about the competitive landscape with other AI browser agents, asking, "Do you worry that AI browser agents (comet etc) will eat this market of light integrations?" The product team responded that they focus on "pretty deep integrations (we include semantic search as well as user action graphs)" and that APIs are superior to browser automation.
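On the indexing question above, a typical semantic-search pipeline chunks documents, embeds each chunk as a vector, and answers queries by nearest-neighbor search over those vectors; this is a plausible reading of the "semantic search" the team mentions, not a description of their actual system. A minimal sketch, with a hypothetical `embed` function standing in for whatever embedding model is used:

```python
# Minimal semantic-search index sketch (illustrative; `embed` is a hypothetical
# embedding function, and a production system would use approximate nearest-neighbor
# search, ACLs, and incremental sync rather than a flat in-memory list).
import numpy as np

def chunk(text: str, size: int = 500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(docs: dict[str, str], embed) -> list[tuple[str, str, np.ndarray]]:
    index = []
    for doc_id, text in docs.items():
        for piece in chunk(text):
            vec = embed(piece)                      # hypothetical: returns a 1-D numpy array
            index.append((doc_id, piece, vec / np.linalg.norm(vec)))
    return index

def search(query: str, index, embed, k: int = 5):
    q = embed(query)
    q = q / np.linalg.norm(q)
    scored = [(float(vec @ q), doc_id, piece) for doc_id, piece, vec in index]
    return sorted(scored, reverse=True)[:k]         # top-k chunks by cosine similarity
```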