This study evaluated the quality and reliability of responses from two user-interactive AI chatbots, ChatGPT and ChatSonic, to patient queries regarding laparoscopic repair of inguinal hernias, and assessed the suitability of these chatbots for addressing such queries. Ten questions on laparoscopic inguinal hernia repair were developed and presented to ChatGPT 4.0 and ChatSonic. Responses were evaluated by two experienced surgeons, blinded to the source, using the Global Quality Score (GQS) and a modified DISCERN score to gauge response quality and reliability, respectively. ChatGPT produced high-quality responses (GQS of 4 or 5) for all ten questions according to one evaluator and for seven of the ten according to the other. Similarly, ChatGPT showed high reliability (modified DISCERN score of 4 or 5) for nine responses according to one evaluator and for three according to the other, with only slight inter-evaluator agreement for both the GQS (kappa = 0.20) and the modified DISCERN score (kappa = 0.08). ChatSonic also provided high-quality, reliable responses for a majority of questions, albeit to a lesser extent than ChatGPT, and the two chatbots showed limited concordance in their responses (p > 0.05). Overall, both ChatGPT and ChatSonic demonstrated potential utility in answering patient queries about hernia surgery. However, given the inconsistencies in reliability and quality, ongoing refinement and validation of AI-generated medical information remain necessary before widespread clinical adoption.
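As context for the agreement statistics reported above (a standard definition, not specific to this study), Cohen's kappa corrects observed inter-rater agreement for agreement expected by chance:

κ = (p_o − p_e) / (1 − p_e)

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance. By the conventional Landis and Koch benchmarks, values of 0.01 to 0.20 denote slight agreement, which is consistent with the interpretation of the kappa values of 0.20 and 0.08 reported here.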